
Statistical Consulting



Hypothesis Testing

Posted on February 28, 2016 at 11:43 PM
The Steps to Performing Hypothesis Testing

  1. Write the original claim and identify whether it is the null hypothesis or the alternative hypothesis.
  2. Write the null and alternative hypothesis. Use the alternative hypothesis to identify the type of test.
  3. Write down all information from the problem.
  4. Find the critical value using the tables.
  5. Compute the test statistic.
  6. Make a decision to reject or fail to reject the null hypothesis. A picture showing the critical value and test statistic may be useful.
  7. Write the conclusion. 
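The seven steps above can be sketched as a short worked example. This is a minimal illustration, assuming scipy; the claim, the sample numbers, and the choice of a one-sample z-test are invented for the example, not taken from any real data.

```python
from scipy.stats import norm

# Hypothetical worked example (all numbers invented): the claim is
# "the population mean exceeds 100"; sigma is known, so a one-sample
# z-test applies.
# Steps 1-2: the claim has no equality, so it is Ha.
#   H0: mu <= 100,  Ha: mu > 100  ->  right-tailed test
mu0, sigma, n = 100, 15, 36        # Step 3: givens from the problem
xbar = 104.5                       # sample mean
alpha = 0.05

cv = norm.isf(alpha)               # Step 4: critical value (upper tail)
ts = (xbar - mu0) / (sigma / n ** 0.5)   # Step 5: test statistic, z = 1.8

# Step 6: decision rule for a right-tailed test
decision = "reject H0" if ts > cv else "fail to reject H0"

# Step 7: state the conclusion in words
print(f"z = {ts:.2f}, critical value = {cv:.3f} -> {decision}")
```

Since z = 1.8 falls above the critical value of about 1.645, the sketch rejects H0 and would conclude there is sufficient evidence to support the claim.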

    Null Hypothesis (H0)

         Statement of zero or no change. If the original claim includes equality (<=, =, or >=), it is the null hypothesis. If the original claim does not include equality (<, not equal, >) then the null hypothesis is the complement of the original claim. The null hypothesis always includes the equal sign. The decision is based on the null hypothesis.

    Alternative Hypothesis (Ha or H1)

         Statement which is true if the null hypothesis is false. The type of test (left, right, or two-tail) is based on the alternative hypothesis. 

    Left Tailed Test

    Ha: parameter < value
    Notice the inequality points to the left.
    Decision Rule: Reject H0 if t.s. < c.v. 

    Right Tailed Test

    Ha: parameter > value
    Notice the inequality points to the right.
    Decision Rule: Reject H0 if t.s. > c.v. 

    Two Tailed Test

    Ha: parameter not equal value
    Another way to write not equal is < or >.
    Notice the inequality points to both sides.
    Decision Rule: Reject H0 if t.s. < c.v. (left) or t.s. > c.v. (right)
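The three decision rules can be collected into one small helper. This is a sketch assuming scipy and z-based critical values; the function name and example statistics are my own illustration.

```python
from scipy.stats import norm

def reject_h0(ts, alpha, tail):
    """Apply the decision rule for a z-based test.
    tail is 'left', 'right', or 'two'. (Helper name is illustrative.)"""
    if tail == "left":
        return ts < norm.ppf(alpha)      # reject if t.s. < c.v.
    if tail == "right":
        return ts > norm.isf(alpha)      # reject if t.s. > c.v.
    cv = norm.isf(alpha / 2)             # two-tailed: split alpha
    return ts < -cv or ts > cv

# Examples at alpha = 0.05 (critical values ~ -1.645, 1.645, +/-1.960):
print(reject_h0(-1.7, 0.05, "left"))     # -1.7 < -1.645 -> reject
print(reject_h0(1.5, 0.05, "right"))     #  1.5 < 1.645  -> fail to reject
print(reject_h0(2.1, 0.05, "two"))       #  2.1 > 1.960  -> reject
```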

    Type I error

    Rejecting the null hypothesis when it is true (saying false when true). Usually the more serious error.

    Type II error

    Failing to reject the null hypothesis when it is false (saying true when false).

    Alpha ( α )

    Probability of committing a Type I error.


    Beta ( β )

    Probability of committing a Type II error.


    Test statistic

    Sample statistic used to decide whether to reject or fail to reject the null hypothesis.

    Critical region

    Set of all values which would cause us to reject H0

    Critical value(s)

    The value(s) which separate the critical region from the non-critical region. The critical values are determined independently of the sample statistics.

    Significance level ( alpha )

    The probability of rejecting the null hypothesis when it is true. alpha = 0.05 and alpha = 0.01 are common. If no level of significance is given, use alpha = 0.05. The level of significance is the complement of the level of confidence in estimation.


    Decision

    A statement based upon the null hypothesis. It is either "reject the null hypothesis" or "fail to reject the null hypothesis". We will never accept the null hypothesis. 


    Conclusion

    A statement which indicates the level of evidence (sufficient or insufficient), the level of significance, and whether the original claim is rejected (null) or supported (alternative).

      Type I and Type II Error

      Posted on February 28, 2016 at 11:30 PM
      Type I and II errors

           There are two kinds of errors that can be made in significance testing: (1) a true null hypothesis can be incorrectly rejected and (2) a false null hypothesis can fail to be rejected. The former error is called a Type I error and the latter error is called a Type II error. These two types of errors are defined in the table.
      Statistical Decision      True State of the Null Hypothesis
                                H0 True          H0 False
      Reject H0                 Type I error     Correct
      Do not Reject H0          Correct          Type II error
           The probability of a Type I error is designated by the Greek letter alpha (α) and is called the Type I error rate; the probability of a Type II error (the Type II error rate) is designated by the Greek letter beta (β). A Type II error is only an error in the sense that an opportunity to reject the null hypothesis correctly was lost. It is not an error in the sense that an incorrect conclusion was drawn, since no conclusion is drawn when the null hypothesis is not rejected. 

           A Type I error, on the other hand, is an error in every sense of the word. A conclusion is drawn that the null hypothesis is false when, in fact, it is true. Therefore, Type I errors are generally considered more serious than Type II errors. The probability of a Type I error (α) is called the significance level and is set by the experimenter. There is a tradeoff between Type I and Type II errors. The more an experimenter protects himself or herself against Type I errors by choosing a low α level, the greater the chance of a Type II error. Requiring very strong evidence to reject the null hypothesis makes it very unlikely that a true null hypothesis will be rejected. However, it increases the chance that a false null hypothesis will not be rejected, thus lowering power. The Type I error rate is almost always set at .05 or at .01, the latter being more conservative since it requires stronger evidence to reject the null hypothesis at the .01 level than at the .05 level.
           A type I error occurs when one rejects the null hypothesis when it is true. The probability of a type I error is the level of significance of the test of hypothesis, and is denoted by *alpha*. Usually a one-tailed hypothesis test is used when one talks about a type I error. 


           If the cholesterol level of healthy men is normally distributed with a mean of 180 and a standard deviation of 20, and men with cholesterol levels over 225 are diagnosed as not healthy, what is the probability of a type I error?  z = (225 - 180)/20 = 2.25; the corresponding tail area is .0122, which is the probability of a type I error.

           If the cholesterol level of healthy men is normally distributed with a mean of 180 and a standard deviation of 20, at what level (in excess of 180) should men be diagnosed as not healthy if you want the probability of a type I error to be 2%?  2% in the tail corresponds to a z-score of 2.05; 2.05 × 20 = 41; 180 + 41 = 221.
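Both type I error calculations can be checked numerically. This is a sketch assuming scipy; the variable names are mine, and the numbers come from the two examples above.

```python
from scipy.stats import norm

mu, sigma = 180, 20           # healthy men: N(180, 20)

# Example 1: cutoff at 225 -> P(type I error) = P(X > 225 | healthy)
z = (225 - mu) / sigma        # 2.25
alpha = norm.sf(z)            # upper-tail area, about .0122

# Example 2: choose the cutoff so that P(type I error) = 2%
cutoff = mu + norm.isf(0.02) * sigma   # about 180 + 2.05 * 20 = 221

print(round(alpha, 4), round(cutoff))
```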
           A type II error occurs when one rejects the alternative hypothesis (fails to reject the null hypothesis) when the alternative hypothesis is true.

            The probability of a type II error is denoted by *beta*. One cannot evaluate the probability of a type II error when the alternative hypothesis is of the form µ > 180, but often the alternative hypothesis is a competing hypothesis of the form: the mean of the alternative population is 300 with a standard deviation of 30, in which case one can calculate the probability of a type II error. 


            If men predisposed to heart disease have a mean cholesterol level of 300 with a standard deviation of 30, but only men with a cholesterol level over 225 are diagnosed as predisposed to heart disease, what is the probability of a type II error (the null hypothesis is that a person is not predisposed to heart disease).  z=(225-300)/30=-2.5 which corresponds to a tail area of .0062, which is the probability of a type II error (*beta*). 

            If men predisposed to heart disease have a mean cholesterol level of 300 with a standard deviation of 30, above what cholesterol level should you diagnose men as predisposed to heart disease if you want the probability of a type II error to be 1%? (The null hypothesis is that a person is not predisposed to heart disease.)  1% in the tail corresponds to a z-score of 2.33 (or -2.33); -2.33 × 30 = -70; 300 - 70 = 230.
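The two type II error calculations can be checked the same way. Again a sketch assuming scipy, with variable names of my own choosing, using the numbers from the examples above.

```python
from scipy.stats import norm

mu_alt, sigma_alt = 300, 30   # predisposed men: N(300, 30)

# Example 1: cutoff at 225 -> P(type II error) = P(X < 225 | predisposed)
z = (225 - mu_alt) / sigma_alt   # -2.5
beta = norm.cdf(z)               # lower-tail area, about .0062

# Example 2: choose the cutoff so that P(type II error) = 1%
cutoff = mu_alt - norm.isf(0.01) * sigma_alt   # about 300 - 2.33 * 30 = 230

print(round(beta, 4), round(cutoff))
```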

      Surviving the Dissertation Process

      Posted on February 28, 2016 at 9:59 PM
      Getting Started, Surviving, and Completing Your Dissertation: Moving Beyond the 'ABD' Status

      By Brian Hunter, M.A.

      Have you ever found yourself spending a Saturday or any day doing everything else you can think of except working on your dissertation? 

     Many, if not most, graduate students have a difficult time getting started, persevering, and finishing the dissertation process.  You are not alone!  There are many obvious reasons for delay: the dissertation process is new, getting access to the articles and to your major professor, time pressures, financial pressures, and learning new skills/techniques such as sampling methods or the dreaded statistics.  You are not alone in these challenges, but the single biggest obstacle to the dissertation process and its completion is in your mind.  Yes, the biggest problem actually lies with you.

      What makes getting started and continuing on the dissertation process seem so difficult? 

     First, your dissertation does involve far more research than you have probably ever done before.  Remember, by the time you begin your dissertation, you have already written many essays, reports, and conference presentations.  A dissertation is really a compilation of seminar papers that are linked through conceptual unity.  That means you have already done most or all of the work in many classes, and the objective is to bring all that work together into one unified dissertation.  So, the work is not unfamiliar to you.  You may then be asking yourself why it seems so difficult again.  Well, completing the dissertation process is largely based on overcoming the difficulties through perseverance.  In other words, do not give up and keep trying!  This may seem oversimplified, and perhaps it is, since challenges still come up that test your resolve and perseverance to move forward.

      Why do I feel so many different things when it comes to writing my dissertation? 

     Emotional responses to the challenges of the dissertation process can vary: anxiety, feeling overwhelmed, feeling burned out and frustrated.  Frustration!  Frustration!  Frustration!  There are also many extreme highs and severe lows in the process that can make you feel like you are on a roller coaster.  Guess what?  All of these responses are normal!  Experiencing emotions while doing anything is normal!  Very few, if any, people engage in any professional or educational activity devoid of emotion.  So, emotional responses only become a problem when they stop you from progressing and persevering in the dissertation process. 

      How do I survive?

     This is the 20-million-dollar question (or 100 million, given inflation), but there are actually many things you can proactively do to survive.  The first thing is to look at what is stopping you and getting you stuck in the process.  In other words, what are the barriers and obstacles you are experiencing, such as interactions with certain professors or committee members, revisions, major changes, perceived negative feedback, and delays?

     Let us begin by taking a look at how to view and work with your department chair or advisor.  These chairs or advisors are your primary contact in the dissertation process, so all news of your progress, regress, success, or immediate failure comes directly from them.  This can create strong emotions around meeting with and using your advisor regularly and appropriately.  Remember, do not hate the messenger!  You should seek out your advisor’s candidness, critique, expertise, and trust, as these are invaluable to your educational and professional development.  Try to build a relationship of cooperation, mutual respect, openness, and trust.  This sounds easy, but we all have different personalities and dispositions. 

     Your advisor is not in an adversarial role with you, trying to make the dissertation process difficult.  Instead, they are a main source of support for you in achieving your academic success.  You cannot control what your advisor does or how they behave, but you can take a look at how you are responding to them and to the dissertation process.  For example, when difficult news arrives from your advisor, committee, or research site, there are many common responses from students:

      ·         Moping and pouting about it for a week;
      ·         Being distressed, angry and offended;
·         Immediately responding with irate and ill-conceived replies;
      ·         Taking a deep breath;
      ·         Recognizing the value of academic critique;
·         Calmly reviewing changes and deficiencies in a cooperative manner;
      ·         Maintaining emotional control.

      How do you respond?  

           Your response to the news and progress on your dissertation not only impacts you, but also impacts your future progress and the people with whom you are working. 

     A further path to survival is networking with other doctoral students.  Going through this process with other students helps give you perspective on, and support in, your situation.  You can also gain invaluable advice and experience about dealing with advisors and committees from students further along in the dissertation process.
      How do I stay motivated and finish?

            Given my experience working with students who have completed the dissertation process, I have seen some common patterns in staying motivated.  These are some of my suggestions:  
o   Distill your dissertation down to one sentence.  This should come from the purpose of the paper.

            Say this sentence to yourself each day that you are working on the paper or are trying to get started working.  Post it on your computer or phone screen saver!
      o   Keep writing, even when you feel stuck.  

      When I wrote this type of paper, I would continue to write by telling myself in the text what was needed, like a list of things to get for the paper, or by jumping to the next section.  Continuing to write kept my mind flowing and kept me on task.  Going to other sections of the paper allowed me to continue to make progress.  
      o   Do random thoughts about your life keep popping in your head? 

      Thoughts such as: I need to go grocery shopping, I have to get my oil changed, what about that wedding, there's a hurricane coming this way, and so on.  Write them down on a notepad or into a Word document as a list of things to remember or do when you are done writing that day.  These real-life thoughts and pressures can pull you away from writing, and there you are again doing everything but your dissertation!
      o   No one ever wrote a dissertation in one day

            Give yourself realistic deadlines and understand that there may be setbacks.  Expect the process to take a year to two years or possibly longer.  Everyone is in different circumstances.  Very few people get to write a dissertation full time and not work or take care of others such as significant others, spouses and children.

      I have one final recommendation.  No matter where you are in the dissertation process, you can benefit greatly by reaching out to a professional consultant.  There are many consultants, like myself, who specialize in working with you on learning how to write an introduction, understanding a literature review, APA formatting, scientific writing, research methods, and the dreaded statistics!  Contact us today!


      Effect Size and Sample Size Calculations

      Posted on February 28, 2016 at 9:18 PM

      What are Effect Size, Power, and Sample Size Calculation, and Why Do We Care?

      By Brian Hunter, M.A.

     You may have heard of these three terms and find them confusing when approaching a dissertation or project that requires you to calculate them and interpret them in the project.

      What do the three terms in the title mean?

     Effect size (ES) is a name given to a group of statistics that measure the magnitude or strength of a treatment or phenomenon effect.  ES measures are the common metric of meta-analysis studies that summarize the findings from a specific area of research.  This tells us how easy or difficult it may be to find an effect when doing a research project.

           The power of any test of statistical significance is defined as the probability that it will correctly reject a false null hypothesis. The question becomes how much power do you want in doing your test?
      Sample size calculation is done to ensure that enough participants or observations are gathered so that the hypothesis test has enough power to detect a true effect if it is actually present.  Sample size calculation, therefore, depends on effect size and power.

      How do we calculate Sample Size?

     First, before calculating sample size, we need to know the default alpha level, the expected power level, the effect size of the phenomenon under study, and the statistical procedure that will be used to test our hypothesis.  Whew, that is a great deal of things to know.  Where do we begin?  We start by conducting what is known as a Power Analysis.

      What is a Power Analysis?

      1.  The primary purpose of power analysis is to estimate sample size. First, the researcher must specify the power level they want to achieve.  The default power level is usually .80 to .95 depending on your field of interest.

      2.  The calculations for power depend on the effect size of the phenomena under study in the population.  You can use published experiments similar to the one you will be conducting or a meta-analysis done on your topic of interest as a guide to finding or calculating for yourself the effect size.

      3.  Use the default alpha level for your field.  In behavioral sciences we use an alpha level of 0.05.

      4.  Choose what statistical test you will use to test your hypothesis.

      5.  Then, you choose your power level, which can be from .80 to .95, meaning you are 80% to 95% sure you have enough power to reject a false null hypothesis and prevent a Type II error.  

      Now that you have done this, what is next?

      How do we conduct a Sample Size Calculation?

       Once you know your power level, effect size, alpha level, and statistical test for the hypothesis, you may use a public domain program known as G*Power (Faul, Erdfelder, Lang, & Buchner, 2007).  Faul et al. (2007) developed this program at the University of Düsseldorf and have made it available to the public for free.  G*Power is able to compute power analyses for many different hypothesis tests such as t tests, F tests, χ2 tests, z tests, and some exact tests.  G*Power can also be used to compute effect sizes and to display the results of power analyses graphically.  The program may be downloaded for free with the developers' permission.
      Let’s look at an example of how to do this.

     So, we decide that we are willing to have a power of .80, and we find from a published meta-analysis that the effect size of the phenomenon we are studying is f = .30, a modest effect.  We want to do a one-way ANOVA with three groups (Treatment 1, Treatment 2, and Placebo) on the dependent variable of depression level to test our hypothesis regarding differential effects among two treatments and a placebo.

     Now, open up G*Power and choose F tests, then choose ANOVA: fixed effects, omnibus, one-way; set power to .80, effect size to .30, and the number of groups to 3.  G*Power does the calculation and produces the output you see below.  We found that we need a total sample size of 111 to have enough power (.80) to detect an effect size of .30.  Please see Table 1 and Figure 1.

      Table 1

      F tests - ANOVA: Fixed effects, omnibus, one-way
      Analysis:    A priori: Compute required sample size
      Input:         Effect size f                               =  0.30
                         α err prob                                 =  0.05
                         Power (1-β err prob)                  =  0.80
                         Number of groups                  =  3
      Output:       Noncentrality parameter λ      =  9.9900000
                         Critical F                                =  3.0803869
                         Numerator df                         =  2
                         Denominator df                      =  108
                         Total sample size                    =  111
                         Actual power                          =  0.8034951
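Table 1 can also be reproduced without G*Power by searching for the smallest total N whose power reaches .80. This is a sketch assuming scipy and G*Power's noncentrality convention for a one-way ANOVA, λ = f² · N; the function name is my own.

```python
from scipy.stats import f, ncf

# Reproduce the a priori calculation in Table 1 using scipy's
# noncentral F distribution (G*Power convention: lambda = f^2 * N).
f_es, alpha, k = 0.30, 0.05, 3      # effect size f, alpha, number of groups

def anova_power(n_total):
    df1, df2 = k - 1, n_total - k   # numerator / denominator df
    lam = f_es ** 2 * n_total       # noncentrality parameter lambda
    crit = f.isf(alpha, df1, df2)   # critical F under H0
    return ncf.sf(crit, df1, df2, lam)  # power = P(F' > crit)

# Search for the smallest total sample size with power >= .80
n = k + 1
while anova_power(n) < 0.80:
    n += 1
print(n, round(anova_power(n), 4))  # matches Table 1: N = 111, power ~ 0.8035
```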
      Was that so difficult?  

     Once you understand what is involved and where to find those values and procedures, this whole idea of sample size estimation turns out not to be so intimidating.  For more on G*Power, see the references below.  Good luck in your research endeavors.


      Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39, 175-191.

      Faul, F., Erdfelder, E., Buchner, A., & Lang, A.-G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41, 1149-1160.


