spreg.GM_Lag

class spreg.GM_Lag(y, x, yend=None, q=None, w=None, w_lags=1, lag_q=True, robust=None, gwk=None, sig2n_k=False, spat_diag=False, vm=False, name_y=None, name_x=None, name_yend=None, name_q=None, name_w=None, name_gwk=None, name_ds=None)[source]

Spatial two stage least squares (S2SLS) with results and diagnostics; Anselin (1988) [Ans88]

Parameters
y : array

nx1 array for dependent variable

x : array

Two dimensional array with n rows and one column for each independent (exogenous) variable, excluding the constant

yend : array

Two dimensional array with n rows and one column for each endogenous variable

q : array

Two dimensional array with n rows and one column for each external exogenous variable to use as instruments (note: this should not contain any variables from x); cannot be used in combination with h

w : pysal W object

Spatial weights object

w_lags : integer

Orders of W to include as instruments for the spatially lagged dependent variable. For example, if w_lags=1, then the instruments are WX; if w_lags=2, then WX and WWX; and so on.

lag_q : boolean

If True, then include spatial lags of the additional instruments (q).

robust : string

If 'white', then a White consistent estimator of the variance-covariance matrix is given. If 'hac', then a HAC consistent estimator of the variance-covariance matrix is given. Default set to None.

gwk : pysal W object

Kernel spatial weights needed for HAC estimation. Note: matrix must have ones along the main diagonal.

sig2n_k : boolean

If True, then use n-k to estimate sigma^2. If False, use n.

spat_diag : boolean

If True, then compute the Anselin-Kelejian test

vm : boolean

If True, include variance-covariance matrix in summary results

name_y : string

Name of dependent variable for use in output

name_x : list of strings

Names of independent variables for use in output

name_yend : list of strings

Names of endogenous variables for use in output

name_q : list of strings

Names of instruments for use in output

name_w : string

Name of weights matrix for use in output

name_gwk : string

Name of kernel weights matrix for use in output

name_ds : string

Name of dataset for use in output

Examples

We first need to import the needed modules, namely numpy to convert the data we read into arrays that spreg understands, and libpysal to handle the spatial data. Since we will need some diagnostics for our model, we also import spreg itself.

>>> import numpy as np
>>> import libpysal
>>> import spreg

Open data on Columbus neighborhood crime (49 areas) using libpysal.io.open(). This is the DBF associated with the Columbus shapefile. Note that libpysal.io.open() also reads data in CSV format; since the actual class requires data to be passed in as numpy arrays, the user can read their data in using any method.

>>> db = libpysal.io.open(libpysal.examples.get_path("columbus.dbf"),'r')

Extract the HOVAL column (home value) from the DBF file and make it the dependent variable for the regression. Note that PySAL requires this to be a numpy array of shape (n, 1) as opposed to the also common shape of (n, ) that other packages accept.

>>> y = np.array(db.by_col("HOVAL"))
>>> y = np.reshape(y, (49,1))

Extract INC (income) and CRIME (crime rates) vectors from the DBF to be used as independent variables in the regression. Note that PySAL requires this to be an nxj numpy array, where j is the number of independent variables (not including a constant). By default this model adds a vector of ones to the independent variables passed in.

>>> X = []
>>> X.append(db.by_col("INC"))
>>> X.append(db.by_col("CRIME"))
>>> X = np.array(X).T

Since we want to run a spatial lag model, we need to specify the spatial weights matrix that includes the spatial configuration of the observations. To do that, we can open an already existing gal file or create a new one. In this case, we will create one from columbus.shp.

>>> w = libpysal.weights.Rook.from_shapefile(libpysal.examples.get_path("columbus.shp"))

Unless there is a good reason not to, the weights have to be row-standardized so every row of the matrix sums to one. Among other things, this allows us to interpret the spatial lag of a variable as the average value of the neighboring observations. In PySAL, this can be easily performed in the following way:

>>> w.transform = 'r'
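
As a quick sanity check (an addition to the original example), we can verify that the transform took effect: with row-standardized weights, every row of the sparse weights matrix should sum to one.

>>> row_sums = np.asarray(w.sparse.sum(axis=1)).flatten()
>>> bool(np.allclose(row_sums, 1.0))
True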

This class runs a lag model, which means that it includes the spatial lag of the dependent variable on the right-hand side of the equation. If we want the names of the variables printed in the output summary, we have to pass them in as well, although this is optional. The most basic model to be run would be:

>>> from spreg import GM_Lag
>>> np.set_printoptions(suppress=True) #prevent scientific format
>>> reg=GM_Lag(y, X, w=w, w_lags=2, name_x=['inc', 'crime'], name_y='hoval', name_ds='columbus')
>>> reg.betas
array([[45.30170561],
       [ 0.62088862],
       [-0.48072345],
       [ 0.02836221]])
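
The last element of reg.betas is the coefficient on the spatially lagged dependent variable. To make the role of w_lags=2 concrete, the following sketch builds the spatial instruments that the class constructs internally; it uses libpysal.weights.lag_spatial purely for illustration and is not part of the original example.

>>> from libpysal.weights import lag_spatial
>>> WX = lag_spatial(w, X)    # first-order lags of X, the instruments when w_lags=1
>>> WWX = lag_spatial(w, WX)  # second-order lags, added to the instrument set when w_lags=2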

Once the model is run, we can obtain the standard error of the coefficient estimates by calling the diagnostics module:

>>> spreg.se_betas(reg)
array([17.91278862,  0.52486082,  0.1822815 ,  0.31740089])

But we can also run a model that incorporates corrected standard errors following the White procedure. For that, we have to include the optional parameter robust='white':

>>> reg=GM_Lag(y, X, w=w, w_lags=2, robust='white', name_x=['inc', 'crime'], name_y='hoval', name_ds='columbus')
>>> reg.betas
array([[45.30170561],
       [ 0.62088862],
       [-0.48072345],
       [ 0.02836221]])

And we can access the standard errors from the model object:

>>> reg.std_err
array([20.47077481,  0.50613931,  0.20138425,  0.38028295])
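
The HAC variant works analogously but additionally requires kernel weights passed through gwk. Below is a hedged sketch, assuming libpysal's Kernel weights with a triangular kernel (whose self-weight equals one, satisfying the ones-on-the-diagonal requirement noted above); the adaptive bandwidth k=15 is an illustrative choice, not a recommendation.

>>> gwk = libpysal.weights.Kernel.from_shapefile(libpysal.examples.get_path("columbus.shp"), k=15, function='triangular', fixed=False)
>>> reg_hac = GM_Lag(y, X, w=w, w_lags=2, robust='hac', gwk=gwk, name_x=['inc', 'crime'], name_y='hoval', name_ds='columbus')
>>> hac_se = reg_hac.std_err  # HAC standard errors, one per coefficient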

The class is flexible enough to accommodate a spatial lag model that, besides the spatial lag of the dependent variable, includes other non-spatial endogenous regressors. As an example, we will assume that CRIME is actually endogenous and instrument for it with DISCBD (distance to the CBD). We reload X with INC only and define CRIME as endogenous and DISCBD as its instrument:

>>> X = np.array(db.by_col("INC"))
>>> X = np.reshape(X, (49,1))
>>> yd = np.array(db.by_col("CRIME"))
>>> yd = np.reshape(yd, (49,1))
>>> q = np.array(db.by_col("DISCBD"))
>>> q = np.reshape(q, (49,1))

And we can run the model again:

>>> reg=GM_Lag(y, X, w=w, yend=yd, q=q, w_lags=2, name_x=['inc'], name_y='hoval', name_yend=['crime'], name_q=['discbd'], name_ds='columbus')
>>> reg.betas
array([[100.79359082],
       [ -0.50215501],
       [ -1.14881711],
       [ -0.38235022]])

Once the model is run, we can obtain the standard error of the coefficient estimates by calling the diagnostics module:

>>> spreg.se_betas(reg)
array([53.0829123 ,  1.02511494,  0.57589064,  0.59891744])
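
Finally, setting spat_diag=True computes the Anselin-Kelejian test for remaining spatial autocorrelation in the residuals; the result is exposed through the ak_test attribute as a (statistic, p-value) pair. A minimal sketch, with outputs omitted:

>>> reg = GM_Lag(y, X, w=w, yend=yd, q=q, w_lags=2, spat_diag=True, name_x=['inc'], name_y='hoval', name_yend=['crime'], name_q=['discbd'], name_ds='columbus')
>>> ak_statistic, ak_pvalue = reg.ak_test
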
Attributes
summary : string

Summary of regression results and diagnostics (note: use in conjunction with the print command)

betas : array

kx1 array of estimated coefficients

u : array

nx1 array of residuals

e_pred : array

nx1 array of residuals (using reduced form)

predy : array

nx1 array of predicted y values

predy_e : array

nx1 array of predicted y values (using reduced form)

n : integer

Number of observations

k : integer

Number of variables for which coefficients are estimated (including the constant)

kstar : integer

Number of endogenous variables.

y : array

nx1 array for dependent variable

x : array

Two dimensional array with n rows and one column for each independent (exogenous) variable, including the constant

yend : array

Two dimensional array with n rows and one column for each endogenous variable

q : array

Two dimensional array with n rows and one column for each external exogenous variable used as instruments

z : array

nxk array of variables (combination of x and yend)

h : array

nxl array of instruments (combination of x and q)

robust : string

Adjustment for robust standard errors

mean_y : float

Mean of dependent variable

std_y : float

Standard deviation of dependent variable

vm : array

Variance covariance matrix (kxk)

pr2 : float

Pseudo R squared (squared correlation between y and predy)

pr2_e : float

Pseudo R squared (squared correlation between y and predy_e (using reduced form))

utu : float

Sum of squared residuals

sig2 : float

Sigma squared used in computations

std_err : array

1xk array of standard errors of the betas

z_stat : list of tuples

z statistic; each tuple contains the pair (statistic, p-value), where each is a float

ak_test : tuple

Anselin-Kelejian test; tuple contains the pair (statistic, p-value)

name_y : string

Name of dependent variable for use in output

name_x : list of strings

Names of independent variables for use in output

name_yend : list of strings

Names of endogenous variables for use in output

name_z : list of strings

Names of exogenous and endogenous variables for use in output

name_q : list of strings

Names of external instruments

name_h : list of strings

Names of all instruments used in output

name_w : string

Name of weights matrix for use in output

name_gwk : string

Name of kernel weights matrix for use in output

name_ds : string

Name of dataset for use in output

title : string

Name of the regression method used

sig2n : float

Sigma squared (computed with n in the denominator)

sig2n_k : float

Sigma squared (computed with n-k in the denominator)

hth : float

\(H'H\)

hthi : float

\((H'H)^{-1}\)

varb : array

\((Z'H (H'H)^{-1} H'Z)^{-1}\)

zthhthi : array

\(Z'H(H'H)^{-1}\)

pfora1a2 : array

n(zthhthi)'varb

__init__(y, x, yend=None, q=None, w=None, w_lags=1, lag_q=True, robust=None, gwk=None, sig2n_k=False, spat_diag=False, vm=False, name_y=None, name_x=None, name_yend=None, name_q=None, name_w=None, name_gwk=None, name_ds=None)[source]

Initialize self. See help(type(self)) for accurate signature.

Methods

__init__(y, x[, yend, q, w, w_lags, lag_q, …])

Initialize self.

Attributes

mean_y

pfora1a2

sig2n

sig2n_k

std_y

utu

vm

property mean_y
property pfora1a2
property sig2n
property sig2n_k
property std_y
property utu
property vm