Extending the multiple linear regression model

25 important questions on extending the multiple linear regression model

Q: What is the purpose of the partial F-test?

A: To test whether a subset of variables is jointly useful in the full model.

Q: What is the complete vs. reduced model?

A:
  • Complete model: includes all predictors X₁ … Xₖ
  • Reduced model: excludes the variables we want to test (X_{g+1} … Xₖ)

Q: What is the null hypothesis in a partial F-test?

H₀: β_{g+1} = β_{g+2} = … = βₖ = 0
(the variables add no useful information)

Q: What is the test statistic for the partial F-test?

A:
F = [(SSE_r − SSE_c) / (k − g)] / [SSE_c / (n − (k + 1))]
where SSE_r and SSE_c are the error sums of squares of the reduced and complete models.

Q: What df are used in the partial F-test?

  • Numerator: k − g
  • Denominator: n − (k + 1)
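
The statistic and its degrees of freedom can be computed directly from the two fits. A minimal numpy sketch on simulated data (the data-generating setup and variable names are illustrative assumptions, not from the source):

```python
import numpy as np

def sse(X, y):
    """Residual sum of squares from an OLS fit of y on X (intercept added)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return float(resid @ resid)

# Simulated data: y depends on x1 only; x2 and x3 are the variables under test.
rng = np.random.default_rng(0)
n, k, g = 100, 3, 1                      # k predictors in the complete model, g kept in the reduced one
X = rng.normal(size=(n, k))
y = 2.0 + 1.5 * X[:, 0] + rng.normal(size=n)

sse_c = sse(X, y)                        # complete model: x1, x2, x3
sse_r = sse(X[:, :g], y)                 # reduced model: x1 only

# Partial F-statistic with (k - g, n - (k + 1)) degrees of freedom.
F = ((sse_r - sse_c) / (k - g)) / (sse_c / (n - (k + 1)))
# Reject H0 if F exceeds the right-tail critical value F_{alpha; k-g, n-(k+1)}.
```

Note that SSE_r ≥ SSE_c always holds for nested models, so F is never negative.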
Q: When do we reject H₀ in a partial F-test?

A: When F exceeds the critical value F_{α; k−g, n−(k+1)} (equivalently, when the p-value is below α).

Q: Is the partial F-test one-sided or two-sided?

One-sided, always right-tailed.

Q: Why must both models (reduced + complete) be estimated?

Because the test uses SSE_r − SSE_c, which compares how much SSE decreases when the additional variables are included.

Q: What does “jointly significant” mean?

The added variables together explain variation in Y (partial F-test rejects H₀).

Q: What is strict collinearity?

One X variable is an exact linear function of others
(e.g., X₃ = X₂ + X₄).
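
Under strict collinearity the design matrix loses rank, so OLS has no unique solution. A small numpy illustration of the X₃ = X₂ + X₄ example (simulated data, names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
x2 = rng.normal(size=50)
x4 = rng.normal(size=50)
x3 = x2 + x4                             # exact linear function of the other predictors

X = np.column_stack([np.ones(50), x2, x3, x4])
rank = np.linalg.matrix_rank(X)          # 3, not 4: one column is redundant, X'X is singular
```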

Q: What is collinearity (non-strict)?

One X is very strongly related to a combination of the other X’s.

Q: Why is collinearity a problem for regression?

1. SE(Bⱼ) becomes large
2. t-values shrink toward 0
3. Individual significance becomes harder to find
4. Interpretation becomes difficult

Q: What happens to the model usefulness under collinearity?

The model can still be very useful (large F) even if individual t-tests are insignificant.

Q: Practical solution for collinearity?

Remove one of the collinear variables (drop the one that adds least meaning).

Q: How does collinearity affect SE(Bⱼ)?

It increases SE(Bⱼ), making the coefficient unstable and less significant.
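
The SE inflation can be seen by fitting the same response twice, once with an independent second predictor and once with a nearly collinear one. A numpy sketch on simulated data (setup is an illustrative assumption):

```python
import numpy as np

def coef_se(X, y):
    """OLS coefficient standard errors: sqrt of diag of s^2 (X'X)^(-1)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    s2 = resid @ resid / (len(y) - X1.shape[1])
    return np.sqrt(s2 * np.diag(np.linalg.inv(X1.T @ X1)))

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
x2_indep = rng.normal(size=n)            # unrelated to x1
x2_coll = x1 + 0.05 * rng.normal(size=n) # almost a copy of x1 -> collinear
y = 1.0 + 2.0 * x1 + rng.normal(size=n)

se_indep = coef_se(np.column_stack([x1, x2_indep]), y)
se_coll = coef_se(np.column_stack([x1, x2_coll]), y)
# se_coll[1], the SE of the x1 coefficient, is many times larger than se_indep[1].
```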

Q: What is a higher-order regression model?

A: A model that includes powers of a predictor (e.g., x², x³) as extra terms, so that curvature in the relationship with Y can be captured.

Q: Why do higher-order terms not violate MLR assumptions?

Each added term (x², x³, etc.) is treated as a new variable with its own coefficient.

Q: How do you test whether X² or X³ should stay in the model?

Perform a t-test on its coefficient (β₂ or β₃).
If not significant → drop the term.
Q: What is an interaction term?

A product of two predictors (e.g., x₁x₂) added to the model, so that the effect of one variable can depend on the level of the other.

Q: Basic interaction model with two variables?

E(y) = β₀ + β₁x₁ + β₂x₂ + β₃x₁x₂

Q: Meaning of β₃ (interaction coefficient)?

It measures how the slope of X₁ changes when X₂ increases.

Q: How to test whether interaction terms are jointly useful?

Use a partial F-test comparing:
• full model (with interactions)
• reduced model (without interactions)

Q: Why can interaction terms cause collinearity?

Because x₁x₂ is often correlated with x₁ and x₂.
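
This correlation is easy to see numerically when the predictors do not have mean zero; one common remedy (an addition not covered in the cards above) is to centre the variables before forming the product. A numpy sketch on simulated data:

```python
import numpy as np

rng = np.random.default_rng(3)
x1 = rng.normal(loc=5.0, scale=1.0, size=500)   # non-centred predictor
x2 = rng.normal(loc=5.0, scale=1.0, size=500)

r_raw = np.corrcoef(x1, x1 * x2)[0, 1]          # x1*x2 tracks x1 closely
x1c, x2c = x1 - x1.mean(), x2 - x2.mean()
r_centred = np.corrcoef(x1c, x1c * x2c)[0, 1]   # centring removes most of that correlation
```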

Q: What are the four key residual checks?

1. Linearity
2. Homoskedasticity
3. Normality of errors
4. Independence of errors
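
The four checks can be sketched numerically on the residuals of a fitted model. A minimal numpy illustration on simulated, well-specified data (the diagnostics and informal thresholds are illustrative assumptions, not formal tests):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)   # well-specified linear model

X1 = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
fitted = X1 @ beta
resid = y - fitted

# 1. Linearity: residuals should show no trend against fitted values.
r_lin = np.corrcoef(fitted, resid)[0, 1]
# 2. Homoskedasticity: residual spread should not grow with fitted values.
r_spread = np.corrcoef(fitted, np.abs(resid))[0, 1]
# 3. Normality: skewness of the residuals should be near 0.
skew = np.mean(resid**3) / np.std(resid)**3
# 4. Independence: lag-1 autocorrelation of the residuals should be near 0.
r_lag1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
```

In practice these checks are usually done with residual plots (residuals vs. fitted values, a normal probability plot) rather than single numbers.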
