BUS 308 Week 2 Lecture 1
Examining Differences – overview
After reading this lecture, the student should be familiar with:
The importance of random sampling.
The meaning of statistical significance.
The basic approach to determining statistical significance.
The meaning of the null and alternate hypothesis statements.
The hypothesis testing process.
The purpose of the F-test and the T-test.
Last week we collected clues and evidence to help us answer our case question about
males and females getting equal pay for equal work. As we looked at the clues presented by the
salary and compa-ratio measures of pay, things got a bit confusing, with results that did not seem to
be consistent. We found, among other things, that the male and female compa-ratios were fairly
close together with the female mean being slightly larger. The salary analysis showed a different
view; here we noticed that the averages were apparently quite different with the males, on
average, earning more. Contradictory findings such as this are not all that uncommon when
examining data in the “real world.”
One issue that we could not fully address last week was how meaningful the differences
were. That is, would a different sample have results that might be completely different, or
can we be fairly sure that the observed differences are real and show up in the population as
well? This issue, often referred to as sampling error, deals with the fact that random samples
taken from a population will generally be a bit different than the actual population parameters,
but will be “close” enough to the actual values to be valuable in decision making.
This week, our journey takes us to ways to explore differences, and how significant these
differences are. Just as clues in mysteries are not all equally useful, not all differences are
equally important; and one of the best things statistics will do for us is tell us what differences
we should pay attention to and what we can safely ignore.
Side note: this is a skill that many managers could benefit from. Not all differences in
performance from one period to another are caused by intentional employee actions; some are
due to random variations that employees have no control over. Knowing which differences to
react to would make managers much more effective.
In keeping with our detective theme, this week could be considered the introduction of
the crime scene experts who help detectives interpret what the physical evidence means and how
it can relate to the crime being looked at. We are getting into the support being offered by
experts who interpret details. We need to know how to use these experts to our fullest advantage.
In general, differences exist in virtually everything we measure that is man-made or
influenced. The underlying issue in statistical analysis is that at times differences are important.
When measuring related or similar things, we have two types of differences: differences in
consistency and differences in average values. Some examples of things that should be the
“same” could be:
The time it takes to drive to work in the morning.
The quality of parts produced on the same manufacturing line.
The time it takes to write a 3-page paper in a class.
The weight of a 10-pound bag of potatoes.
All of these “should” be the same, as each relates to the same outcome. Yet, they all differ. We
all experience differences in travel time, and the time it takes to produce the same output on the
job or in school (such as a 3-page paper). Production standards all recognize that outcomes
should be measured within a range rather than a single point. For example, few of us would be
upset if a 10-pound bag of potatoes weighed 9.85 pounds or would think we were getting a great
deal if the bag weighed 10.2 pounds. We realize that it is virtually impossible for a given
number of potatoes to weigh exactly the same and we accept this as normal.
One reason for our acceptance is that we know that variation occurs. Variation is simply
the differences that occur in things that should be “the same.” If we can measure things with
enough detail, everything we do in life has variation over time: when we get up in the morning,
how long it takes to get to work, how effective we are at doing the same thing over and over, and so on.
Except for physical constants, we can say that things differ and we need to recognize this. A side
note: variation exists in virtually everything we study (we have more than one language, word,
sentence, paragraph, past actions, financial transactions, etc.), but only in statistics do we bring
this idea front and center for examination.
This suggests that any population that we are interested in will consist of things that are
slightly different, even if the population contains only one “thing.” Males are not all the same,
neither are females. Manufactured parts differ in key measurements; this is the reason we have
quality control checking to make sure the differences are not too large. So, even if we measure
everything in our population we will have a mean that is accompanied by a standard deviation
(or range). Managers and professionals need to manage this variation, whether it is quantitative
(such as salary paid for similar work) or even qualitative (such as interpersonal interactions with others).
The second reason that we are so concerned with differences is that we rarely have all the
evidence, or all the possible measures of what we are looking for. Having this would mean we
have access to the entire population (everything we are interested in); rarely is this the case.
Generally, all decisions, analysis, research, etc. is done with samples, a selected subset of the
population. And, with any sample we are not going to have all the information needed, obviously;
but we also know that each sample we take is going to differ a bit. (Remember, variation is
everywhere, including in the consistency of sample values.) If you are not sure of this, try
flipping a coin 10 times for each of 10 trials; do you expect, or get, the exact same number of
heads in each trial? Variation!
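That coin-flip experiment is easy to try in code. Here is a minimal sketch (plain Python, standard library only; the seed value is arbitrary) that runs 10 trials of 10 flips each and prints the head count for each trial:

```python
import random

def heads_in_trial(flips=10, rng=None):
    """Flip a fair coin `flips` times and count the heads."""
    rng = rng or random.Random()
    return sum(rng.random() < 0.5 for _ in range(flips))

# 10 trials of 10 flips each -- note how the head counts vary.
rng = random.Random(308)  # seeded only so the sketch is repeatable
counts = [heads_in_trial(10, rng) for _ in range(10)]
print(counts)
```

Run it a few times with different seeds: the counts bounce around their expected value of 5 rather than repeating exactly, which is precisely the sampling variation the lecture describes.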
Since we are making decisions using samples, we have even more variation to consider
than simply that with the population we are looking at. Each sample will be slightly different
from its population and from others taken from the same population.
How do we make informed decisions with all this variation and our not being able to
know the “real” values of the measures we are using? This question is much like how detectives
develop the “motive” for a crime – do they know exactly how the guilty party felt/thought when
they say, “he was jealous of the success the victim had”? This could be true, but it is only an
approximation of the true feelings; still, it is “close enough” to say it was the reason. It is similar
with data samples: good ones are “close enough” to use the results to make decisions with. The
question we have now focuses on how do we know what the data results show?
The answer lies with statistical tests. They can use the observed variation to provide
results that let us make decisions with a known chance of being wrong! Most managers hope to
be right just over 50% of the time; a statistical decision can be correct 95% or more of the time!
Quite an improvement.
Sampling. The use of samples brings us to a distinction in summary statistics, between
descriptive and inferential statistics. With one minor exception (discussed shortly), these two
appear to be the same: means, standard deviations, etc. However, one very important distinction
exists in how we use these. Descriptive statistics, as we saw last week, describes a data set. But,
that is all they do. We cannot use them to make claims or inferences about any other, larger population.
Making inferences or judgements about a larger population is the role of inferential
statistics and statistical tests. So, what makes descriptive statistics sound enough to become
inferential statistics? The group they were taken from! If we have a sample that is randomly
selected from the population (meaning that each member has the same chance of being selected
at the start), then we have our best chance of having a sample that accurately reflects the
population, and we can use the statistics developed from that sample to make inferences back to
the population. (How we develop a randomly selected sample is more of a research course issue,
and we will not go into these details. You are welcome to search the web for approaches.)
Random Sampling. If we are not working with a random sample, then our descriptive
statistics apply only to the group they are developed for. For example, asking all of our friends
their opinion of Facebook only tells us what our friends feel; we cannot say that their opinions
reflect all Facebook users, all Facebook users that fall in the age range of our friends, or any
other group. Our friends are not a randomly selected group of Facebook users, so they may not
be typical; and, if not typical users, cannot be considered to reflect the typical users.
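By contrast, a random sample is one where every member of the population frame has the same chance of being drawn. A sketch using Python's standard `random.sample` (the employee-ID population here is made up purely for illustration):

```python
import random

# Hypothetical population frame: employee IDs 1 through 50.
population = list(range(1, 51))

rng = random.Random(42)  # arbitrary seed, for repeatability
sample = rng.sample(population, k=10)  # each ID equally likely to be chosen

print(sorted(sample))
```

Because `sample` draws without replacement, the 10 IDs are distinct, and every possible group of 10 was equally likely to be the one selected.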
If our sample is random, then we know (or strongly suspect) a few things. First, the
sample is unlikely to contain both the smallest and largest value that exists in the larger
population, so an estimate of the population variation is likely to be too small if based on the
sample. This is corrected by using a sample standard deviation formula rather than a population
formula. We will look at what this means specifically in the other lectures this week; but Excel
will do this for us easily.
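That correction is simply the divisor in the formula: n - 1 for a sample instead of n for a full population. Python's statistics module exposes both versions, as this sketch with made-up salary figures shows:

```python
import statistics

data = [42_000, 48_000, 51_000, 55_000, 60_000]  # hypothetical salaries

s = statistics.stdev(data)       # sample formula: divides by n - 1
sigma = statistics.pstdev(data)  # population formula: divides by n

# The sample version is always a bit larger, compensating for the fact
# that a sample rarely captures the population's extreme values.
print(round(s, 2), round(sigma, 2))
```

The two differ by exactly the factor sqrt(n / (n - 1)), which shrinks toward 1 as the sample grows, so the correction matters most for small samples.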
Second, we know that our summary statistics are not the same as the population’s
parameter values. We are dealing with some (generally small) errors. This is where the new
statistics student often begins to be uncomfortable. How can we make good judgements if our
information is wrong? This is a reasonable question, and one that we, as data detectives, need to
be comfortable with.
The first part of the answer falls with the design of the sample, by selecting the right
sample size (how many are in the sample), we can control the relative size of the likely error.
For example, we can design a sample where the estimated error for our average salary is about
plus or minus $1,000. Does knowing that our estimates could be $1000 off change our view of
the data? If the female average was a thousand dollars more and the male salary was a thousand
dollars less, would you really change your opinion about them being different? Probably not
with the difference we see in our salary values (around 38K versus 52K). If the actual averages
were closer together, this error range might impact our conclusions, so we could select a sample
with a smaller error range. (Again, the technical details on how to do this are found in research
courses. For our statistics class, we assume we have the correct sample.)
Note, this error range is often called the margin of error. We see this most often in
opinion polls. For example, if a poll said that the percent of Americans who favored Federal
Government support for victims of natural disasters (hurricanes, floods, etc.) was 65% with a
margin of error of +/- 3%, we would say that the true proportion was somewhere between 62% and
68%, clearly a majority of the population. Where the margin of error becomes important to
know is when results are closer together, such as when support is 52% in favor versus 48%
opposed, with a margin of error of 3%. This means the actual support could be as low as 49% or
as high as 55%; meaning the results are generally too close to make a solid decision that the issue
is supported by a majority, the proverbial “too close to call.”
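The poll arithmetic above can be checked directly. For a sample proportion p from n respondents, the usual 95% margin of error is approximately 1.96 times the square root of p(1 - p)/n. A sketch (the sample size of 1,070 is a made-up value chosen to give roughly a 3-point margin):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion p with sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A made-up poll: 52% support from roughly 1,070 respondents.
moe = margin_of_error(0.52, 1070)
low, high = 0.52 - moe, 0.52 + moe
print(f"52% +/- {moe:.1%}  ->  interval {low:.1%} to {high:.1%}")
```

With these numbers the interval runs from about 49% to about 55%, straddling the 50% mark, which is exactly why the lecture calls such a result "too close to call."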
The second part of answering the question of how do we make good decisions introduces
the tools we will be looking at this week, decision making statistical tests that focus on
examining the size of observed differences to see if they are “meaningful” or not. The neat part
of these tools is that we do not need to know what the sampling error was, as the techniques will
automatically include this impact in our results!
The statistical tools we will be looking at for the next couple of weeks all “work” due to a
couple of assumptions about the population. First, the data needs to be at the interval or ratio
level; that is, the differences between sequential values need to be constant (such as in temperature or
money). Additionally, the data is assumed to come from a population that is normally
distributed, the normal curve shape that we briefly looked at last week. Note that many
statisticians feel that minor deviations from these strict assumptions will not significantly impact
the outcomes of the tests.
The tools for this week and next use the same basic logic. If we take a lot of samples
from the population and graph the means for all of them, we will get a normal distribution (even
if the population is not exactly normal), called the sampling distribution of the mean.
Makes sense as we are using sample means. This distribution has an overall, or grand, mean
equal to that of the population. The standard deviation equals the standard deviation of the
population divided by the square root of the sample size. (Let’s take this on faith for now; trust
me you do not want to see the math behind proving these. But if you do, I invite you to look it
up on the web.) Now, knowing – in theory – what the mean values will be from population
samples, we can look at how any given sample differs from what we think the population mean
is. This difference can be translated into what is essentially a z-score (although the specific
measure will vary depending upon the test we are using) that we looked at last week. With this
statistic, we can determine how likely (the probability of) getting a difference as large or larger
than we have purely by chance (sampling error from the actual population value) alone.
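Rather than taking the math on faith, we can check the claim by simulation: draw many samples from a deliberately non-normal population and the sample means cluster around the population mean, with a spread close to the population standard deviation divided by the square root of the sample size. A standard-library-only sketch (the skewed population here is arbitrary, made up for illustration):

```python
import random
import statistics

rng = random.Random(1)  # arbitrary seed, for repeatability

# A deliberately skewed (non-normal) population of made-up dollar values.
population = [rng.expovariate(1 / 50_000) for _ in range(100_000)]
pop_mean = statistics.fmean(population)
pop_sd = statistics.pstdev(population)

n = 30  # size of each sample
sample_means = [statistics.fmean(rng.sample(population, n))
                for _ in range(2_000)]

grand_mean = statistics.fmean(sample_means)
se_observed = statistics.pstdev(sample_means)
se_theory = pop_sd / n ** 0.5  # standard error: sigma / sqrt(n)

print(round(grand_mean), round(pop_mean))    # close to each other
print(round(se_observed), round(se_theory))  # also close to each other
```

Even though the population is strongly skewed, the 2,000 sample means form a roughly normal pile centered on the population mean, with spread near the theoretical standard error.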
If we have a small likelihood of getting this large of a difference, we say that our
difference is too large to have been purely a sampling error, and we say a real difference exists or
that the mean of the population that the sample came from is not what we thought.
That is the basic logic of statistical testing. Of course, the actual process is a bit more
structured, but the logic holds: if the probability of getting our result is small (for example 4% or
0.04), we say the difference is significant. If the probability is large (for example 37% or 0.37),
then we say there is not enough evidence to say the difference is anything but a simple sampling
error difference from the actual population result.
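The "how likely is a difference this large" step can be sketched with a z-score and the normal curve. This uses only the standard library (math.erfc gives the two-tailed normal probability) and made-up numbers for the means and standard error:

```python
import math

def two_tailed_p(z):
    """Probability of a z-score this large or larger (either direction)
    under the normal curve."""
    return math.erfc(abs(z) / math.sqrt(2))

# Made-up example: sample mean 52,000 vs. an assumed population mean
# of 48,000, with a standard error of 1,500.
z = (52_000 - 48_000) / 1_500
p = two_tailed_p(z)

alpha = 0.05  # conventional significance level
decision = "significant" if p < alpha else "not significant"
print(f"z = {z:.2f}, p = {p:.4f} -> {decision}")
```

Here the probability of so large a difference arising from sampling error alone is under 1%, so we would call the difference significant; a p-value like 0.37 would instead leave us saying the data are consistent with no real difference.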
The tools we will be adding to our bag of tricks this week will allow us to examine
differences between data sets. One set of tools, called the t-test, looks at means to see if the
observed difference is significant or merely a chance difference due mostly to sampling error
rather than a true difference in the population. Knowing if means differ is a critical issue in
examining groups and making decisions.
The other tool, the F-test for variance, does the same for the data variation between
groups. Often ignored, the consistency within groups is an important characteristic in
understanding whether groups having similar means can be said to be similar or not. For
example, if a group of English majors all took two classes together, one math and one English,
would you expect the grade distributions to be similar, or would you expect one to show a larger
range (or variation) than the other?
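The F statistic itself is simply the ratio of the two sample variances (larger over smaller, so F is at least 1). A sketch with hypothetical grade data; judging significance would still require an F table or a statistics package, so this computes only the ratio:

```python
import statistics

# Hypothetical grades for the same students in two classes.
math_grades = [62, 71, 88, 55, 94, 78, 66, 83]
english_grades = [74, 78, 81, 76, 80, 79, 77, 82]

v_math = statistics.variance(math_grades)        # sample variance
v_english = statistics.variance(english_grades)

# Put the larger variance on top so the ratio is >= 1.
f_stat = max(v_math, v_english) / min(v_math, v_english)

print(round(f_stat, 1))  # far above 1: math grades vary much more
```

An F near 1 says the two groups are about equally consistent; a ratio this far above 1 suggests the spreads genuinely differ, even though the two means might be similar.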
We will see throughout the class that consistency and differences are key elements to
understanding what the data is hiding from us, or trying to tell us – depending on how you look
at it. In either case, as detectives our job is to ferret out the information we need to answer our questions.
Hypothesis Testing – Are Differences Meaningful?
Here is where the crime scene experts come in. Detectives have found something but are
not completely sure of how to interpret it. Now the training and tools used by detectives and
analysts take over to examine what is found and make some interpretations. The process or
standard approach that we will use is called the hypothesis testing procedure. It consists of six
steps: the first four set up the problem and how we will make our decisions (and are done
before we do anything with the actual data), the fifth step involves the analysis (done with
Excel), and the final and sixth step focuses on interpreting the result.
The hypothesis testing procedure is a standardized decision-making process that ensures
our decisions (on whether things are significantly different or not) are based on the data,
and not some other factors. Many times, our results are more conservative than individual
managerial judgements; that is, a statistical decision will call fewer things significantly different
than many managerial judgement calls. This statistical tendency is, at times, frustrating for
managers who want to show that things have changed. At other times, it is a benefit such as if
we are hoping that things, such as error rates, have not changed.
While a lot of statistical texts have slightly different versions of the hypothesis testing
procedure (fewer or more steps), they are essentially the same, and are a spinoff of the scientific
method. For this class, we will use the following six steps:
1. State the null and alternate hypothesis.
2. Select a level of significance.
3. Identify the statistical test to use.
4. State the decision rule. (Steps 1 – 4 are done before we examine the data.)
5. Perform the analysis.
6. Interpret the result.
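The six steps can be traced in a short script. This sketch follows them for a made-up one-sample question ("is the mean compa-ratio 1.0?"), using a z-approximation so that only the standard library is needed; in this class, the analysis step itself would be done in Excel:

```python
import math
import statistics

# Step 1: hypotheses.  Ho: mean compa-ratio = 1.0;  Ha: mean != 1.0
# Step 2: select a level of significance.
alpha = 0.05
# Step 3: identify the test -- a test on a mean (z-approximation here).
# Step 4: decision rule -- reject Ho if the p-value < alpha.

# Step 5: perform the analysis (hypothetical compa-ratio sample).
data = [0.94, 1.02, 0.98, 1.10, 0.89, 1.05, 0.97, 1.01, 0.93, 1.08]
mean = statistics.fmean(data)
se = statistics.stdev(data) / math.sqrt(len(data))  # standard error
z = (mean - 1.0) / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-tailed probability

# Step 6: interpret the result.
conclusion = "reject Ho" if p_value < alpha else "fail to reject Ho"
print(f"mean = {mean:.3f}, p = {p_value:.3f} -> {conclusion}")
```

Note that steps 1 through 4 are fixed in the code before the data is ever touched, exactly as the procedure requires; with this made-up sample the mean sits very close to 1.0, so we fail to reject the null.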
A hypothesis is a claim about an outcome. It comes in two forms. The first is the null
hypothesis – sometimes called the testable hypothesis, as it is the claim we perform all of our
statistical tests on. It is termed the “Null” hypothesis, shown as Ho, as it basically says “no
difference exists.” Even if we want to test for a difference, such as males and females having a
different average compa-ratio, in statistics we test to see if they do not.
Why? It is easier to show that something differs from a fixed point than it is to show that
the difference is meaningful – I mean how can we focus on “different?” What does “different”
mean? So, we go with testing no difference. The key rule about developing a null hypothesis is
that it always contains an equal claim; this could be equal (=), equal to or less than (≤), or equal to or greater than (≥).
Here are some examples:
Ex 1: Question: Is the female compa-ratio mean = 1.0?
Ho: Female compa-ratio mean = 1.0.
Ex 2: Q: Is the female compa-ratio mean = the male compa-ratio mean?
Ho: Female compa-ratio mean = Male compa-ratio mean.
Ex. 3: Q: Is the female compa-ratio more than the male compa-ratio? Note that this
question does not contain an equal condition. In this case, the null is the opposite of what
the question asks:
Ho: Female compa-ratio mean ≤ Male compa-ratio mean.