What statistical analysis should I use?(2)  



Factorial logistic regression

A factorial logistic regression is used when you have two or more categorical independent variables but a dichotomous dependent variable.  For example, using the hsb2 data file we will use female as our dependent variable, because it is the only dichotomous (0/1) variable in our data set; certainly not because it is common practice to use gender as an outcome variable.  We will use type of program (prog) and school type (schtyp) as our predictor variables.  Because prog is a categorical variable (it has three levels), we need to create dummy codes for it; the use of i.prog does this, and the ## operator includes both the main effects and their interaction.  You can use the logit command if you want to see the regression coefficients or the logistic command if you want to see the odds ratios.

logit female i.prog##schtyp

Iteration 0:   log likelihood = -137.81834
Iteration 1:   log likelihood = -136.25886
Iteration 2:   log likelihood = -136.24502
Iteration 3:   log likelihood = -136.24501

Logistic regression                               Number of obs   =        200
                                                  LR chi2(5)      =       3.15
                                                  Prob > chi2     =     0.6774
Log likelihood = -136.24501                       Pseudo R2       =     0.0114

------------------------------------------------------------------------------
      female |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        prog |
          2  |   .3245866   .3910782     0.83   0.407    -.4419125    1.091086
          3  |   .2183474   .4319116     0.51   0.613    -.6281839    1.064879
             |
    2.schtyp |   1.660724   1.141326     1.46   0.146    -.5762344    3.897683
             |
 prog#schtyp |
        2 2  |  -1.934018   1.232722    -1.57   0.117    -4.350108    .4820729
        3 2  |  -1.827778   1.840256    -0.99   0.321    -5.434614    1.779057
             |
       _cons |  -.0512933   .3203616    -0.16   0.873    -.6791906     .576604
------------------------------------------------------------------------------

The results indicate that the overall model is not statistically significant (LR chi2 = 3.15, p = 0.6774).  Furthermore, none of the coefficients are statistically significant either.  We can use the test command to get the test of the overall effect of prog as shown below.  This shows that the overall effect of prog is not statistically significant.

test 2.prog 3.prog

 ( 1)  [female]2.prog = 0
 ( 2)  [female]3.prog = 0

           chi2(  2) =    0.69
         Prob > chi2 =    0.7086

Likewise, we can use the testparm command to get the test of the overall effect of the prog by schtyp interaction, as shown below.  This shows that the overall effect of this interaction is not statistically significant.

testparm prog#schtyp

 ( 1)  [female]2.prog#2.schtyp = 0
 ( 2)  [female]3.prog#2.schtyp = 0

           chi2(  2) =    2.47
         Prob > chi2 =    0.2902

If you prefer, you could use the logistic command to see the results as odds ratios, as shown below.

logistic female i.prog##schtyp

Logistic regression                               Number of obs   =        200
                                                  LR chi2(5)      =       3.15
                                                  Prob > chi2     =     0.6774
Log likelihood = -136.24501                       Pseudo R2       =     0.0114

------------------------------------------------------------------------------
      female | Odds Ratio   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        prog |
          2  |   1.383459   .5410405     0.83   0.407     .6428059    2.977505
          3  |   1.244019   .5373063     0.51   0.613     .5335599    2.900487
             |
    2.schtyp |   5.263121   6.006939     1.46   0.146     .5620107    49.28811
             |
 prog#schtyp |
        2 2  |   .1445662   .1782099    -1.57   0.117     .0129054    1.619428
        3 2  |   .1607704   .2958586    -0.99   0.321     .0043629    5.924268
------------------------------------------------------------------------------

Correlation

A correlation is useful when you want to see the linear relationship between two (or more) normally distributed interval variables.  For example, using the hsb2 data file, we can run a correlation between two continuous variables, read and write.

corr read write
(obs=200)

             |     read    write
-------------+------------------
        read |   1.0000
       write |   0.5968   1.0000

In the second example, we will run a correlation between a dichotomous variable, female, and a continuous variable, write. Although it is assumed that the variables are interval and normally distributed, we can include dummy variables when performing correlations.

corr female write
(obs=200)

             |   female    write
-------------+------------------
      female |   1.0000
       write |   0.2565   1.0000

In the first example above, we see that the correlation between read and write is 0.5968.  By squaring the correlation and then multiplying by 100, you can determine what percentage of the variability is shared.  Let's round 0.5968 to be 0.6, which when squared would be .36, multiplied by 100 would be 36%.  Hence read shares about 36% of its variability with write.  In the output for the second example, we can see the correlation between write and female is 0.2565.  Squaring this number yields .06579225, meaning that female shares approximately 6.5% of its variability with write.
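
If you prefer to let Stata do this arithmetic, the display command can be used as a calculator.  This is just a sketch of the computation described above, using the correlations printed in the output.

display 0.5968^2*100   // percent of variability read shares with write, about 35.6%
display 0.2565^2*100   // percent of variability female shares with write, about 6.6%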


Simple linear regression

Simple linear regression allows us to look at the linear relationship between one normally distributed interval predictor and one normally distributed interval outcome variable.  For example, using the hsb2 data file, say we wish to look at the relationship between writing scores (write) and reading scores (read); in other words, predicting write from read.  

regress write read

------------------------------------------------------------------------------
       write |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        read |   .5517051   .0527178    10.47   0.000     .4477446    .6556656
       _cons |   23.95944   2.805744     8.54   0.000     18.42647    29.49242
------------------------------------------------------------------------------

We see that the relationship between write and read is positive (.5517051) and based on the t-value (10.47) and p-value (0.000), we would conclude this relationship is statistically significant.  Hence, we would say there is a statistically significant positive linear relationship between reading and writing.
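
As a quick illustration of how these coefficients are used, the fitted equation is write = 23.95944 + .5517051*read.  The sketch below plugs in a reading score of 50 (an arbitrary value chosen for illustration), which gives a predicted writing score of roughly 51.5.

display 23.95944 + .5517051*50   // predicted writing score when read = 50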


Non-parametric correlation

A Spearman correlation is used when one or both of the variables are not assumed to be normally distributed and interval (but are assumed to be ordinal).  The values of the variables are converted into ranks and then correlated.  In our example, we will look for a relationship between read and write.  We will not assume that these variables are normal and interval.

spearman read write
 Number of obs =     200
Spearman's rho =       0.6167

Test of Ho: read and write are independent
    Prob > |t| =       0.0000

The results suggest that the relationship between read and write (rho = 0.6167, p = 0.000) is statistically significant. 

Simple logistic regression

Logistic regression assumes that the outcome variable is binary (i.e., coded as 0 and 1).  We have only one variable in the hsb2 data file that is coded 0 and 1, and that is female.  We understand that female is a silly outcome variable (it would make more sense to use it as a predictor variable), but we can use female as the outcome variable to illustrate how the code for this command is structured and how to interpret the output.  The first variable listed after the logistic (or logit) command is the outcome (or dependent) variable, and all of the rest of the variables are predictor (or independent) variables.  You can use the logit command if you want to see the regression coefficients or the logistic command if you want to see the odds ratios.  In our example, female will be the outcome variable, and read will be the predictor variable.  As with OLS regression, the predictor variables must be either dichotomous or continuous; they cannot be categorical.

logistic female read

Logit estimates                                   Number of obs   =        200
                                                  LR chi2(1)      =       0.56
                                                  Prob > chi2     =     0.4527
Log likelihood = -137.53641                       Pseudo R2       =     0.0020

------------------------------------------------------------------------------
      female | Odds Ratio   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        read |   .9896176   .0137732    -0.75   0.453     .9629875    1.016984
------------------------------------------------------------------------------

logit female read

Iteration 0:   log likelihood = -137.81834
Iteration 1:   log likelihood = -137.53642
Iteration 2:   log likelihood = -137.53641

Logit estimates                                   Number of obs   =        200
                                                  LR chi2(1)      =       0.56
                                                  Prob > chi2     =     0.4527
Log likelihood = -137.53641                       Pseudo R2       =     0.0020

------------------------------------------------------------------------------
      female |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        read |  -.0104367   .0139177    -0.75   0.453    -.0377148    .0168415
       _cons |   .7260875   .7419612     0.98   0.328    -.7281297    2.180305
------------------------------------------------------------------------------

The results indicate that reading score (read) is not a statistically significant predictor of gender (i.e., being female), z = -0.75, p = 0.453.  Likewise, the test of the overall model is not statistically significant, LR chi-squared = 0.56, p = 0.4527.
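
Note that the two commands fit the same model; the odds ratio reported by logistic is simply the exponentiated coefficient reported by logit.  As a quick check using the numbers printed above:

display exp(-.0104367)   // reproduces the odds ratio for read, about .98962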


Multiple regression

Multiple regression is very similar to simple regression, except that in multiple regression you have more than one predictor variable in the equation.  For example, using the hsb2 data file we will predict writing score from gender (female), reading, math, science and social studies (socst) scores.

regress write female read math science socst
      Source |       SS       df       MS              Number of obs =     200
-------------+------------------------------           F(  5,   194) =   58.60
       Model |  10756.9244     5  2151.38488           Prob > F      =  0.0000
    Residual |   7121.9506   194  36.7110855           R-squared     =  0.6017
-------------+------------------------------           Adj R-squared =  0.5914
       Total |   17878.875   199   89.843593           Root MSE      =   6.059

------------------------------------------------------------------------------
       write |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
      female |   5.492502   .8754227     6.27   0.000     3.765935     7.21907
        read |   .1254123   .0649598     1.93   0.055    -.0027059    .2535304
        math |   .2380748   .0671266     3.55   0.000     .1056832    .3704665
     science |   .2419382   .0606997     3.99   0.000     .1222221    .3616542
       socst |   .2292644   .0528361     4.34   0.000     .1250575    .3334713
       _cons |   6.138759   2.808423     2.19   0.030      .599798    11.67772
------------------------------------------------------------------------------

The results indicate that the overall model is statistically significant (F = 58.60, p = 0.0000).  Furthermore, all of the predictor variables are statistically significant except for read.  


Analysis of covariance

Analysis of covariance is like ANOVA, except that in addition to the categorical predictors you also have continuous predictors.  For example, the one-way ANOVA example used write as the dependent variable and prog as the independent variable.  Let's add read as a continuous variable to this model, as shown below.

anova write prog c.read

                          Number of obs =     200     R-squared     =  0.3925
                          Root MSE      = 7.44408     Adj R-squared =  0.3832

     Source |  Partial SS    df       MS           F     Prob > F
 -----------+----------------------------------------------------
      Model |  7017.68123     3  2339.22708      42.21     0.0000
            |
       prog |  650.259965     2  325.129983       5.87     0.0034
       read |  3841.98338     1  3841.98338      69.33     0.0000
            |
   Residual |  10861.1938   196  55.4142539
 -----------+----------------------------------------------------
      Total |   17878.875   199   89.843593

The results indicate that even after adjusting for reading score (read), writing scores still significantly differ by program type (prog), F = 5.87, p = 0.0034.


Multiple logistic regression

Multiple logistic regression is like simple logistic regression, except that there are two or more predictors.  The predictors can be interval variables or dummy variables, but cannot be categorical variables.  If you have categorical predictors, they should be coded into one or more dummy variables. We have only one variable in our data set that is coded 0 and 1, and that is female.  We understand that female is a silly outcome variable (it would make more sense to use it as a predictor variable), but we can use female as the outcome variable to illustrate how the code for this command is structured and how to interpret the output.  The first variable listed after the logistic (or logit) command is the outcome (or dependent) variable, and all of the rest of the variables are predictor (or independent) variables.  You can use the logit command if you want to see the regression coefficients or the logistic command if you want to see the odds ratios.  In our example, female will be the outcome variable, and read and write will be the predictor variables. 

logistic female read write

Logit estimates                                   Number of obs   =        200
                                                  LR chi2(2)      =      27.82
                                                  Prob > chi2     =     0.0000
Log likelihood = -123.90902                       Pseudo R2       =     0.1009

------------------------------------------------------------------------------
      female | Odds Ratio   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        read |   .9314488   .0182578    -3.62   0.000     .8963428    .9679298
       write |   1.112231   .0246282     4.80   0.000     1.064993    1.161564
------------------------------------------------------------------------------

These results show that both read and write are significant predictors of female.


Discriminant analysis

Discriminant analysis is used when you have one or more normally distributed interval independent variables and a categorical dependent variable.  It is a multivariate technique that considers the latent dimensions in the independent variables for predicting group membership in the categorical dependent variable.  For example, using the hsb2 data file, say we wish to use read, write and math scores to predict the type of program a student belongs to (prog).  For this analysis, you need to first download the daoneway program that performs this test.  You can download daoneway from within Stata by typing findit daoneway (see How can I use the findit command to search for programs and get additional help? for more information about using findit).

You can then perform the discriminant function analysis like this.

daoneway read write math, by(prog)
One-way Disciminant Function Analysis

Observations = 200
Variables    = 3
Groups       = 3

                  Pct of    Cum  Canonical |  After  Wilks'
 Fcn Eigenvalue Variance    Pct     Corr   |   Fcn   Lambda  Chi-square  df  P-value
                                           |    0   0.73398      60.619   6   0.0000
   1     0.3563    98.74  98.74    0.5125  |    1   0.99548       0.888   2   0.6414
   2     0.0045     1.26 100.00    0.0672  |

Unstandardized canonical discriminant function coefficients
         func1    func2
 read   0.0292  -0.0439
write   0.0383   0.1370
 math   0.0703  -0.0793
_cons  -7.2509  -0.7635

Standardized canonical discriminant function coefficients
         func1    func2
 read   0.2729  -0.4098
write   0.3311   1.1834
 math   0.5816  -0.6557

Canonical discriminant structure matrix
         func1    func2
 read   0.7785  -0.1841
write   0.7753   0.6303
 math   0.9129  -0.2725

Group means on canonical discriminant functions
          func1    func2
prog-1  -0.3120   0.1190
prog-2   0.5359  -0.0197
prog-3  -0.8445  -0.0658

Clearly, the Stata output for this procedure is lengthy, and it is beyond the scope of this page to explain all of it.  However, the main point is that two canonical variables are identified by the analysis, the first of which seems to be more related to program type than the second.  For more information, see this page on discriminant function analysis.


One-way MANOVA

MANOVA (multivariate analysis of variance) is like ANOVA, except that there are two or more dependent variables.  In a one-way MANOVA, there is one categorical independent variable and two or more dependent variables.  For example, using the hsb2 data file, say we wish to examine the differences in read, write and math broken down by program type (prog).  For this analysis, you can use the manova command and then perform the analysis like this.

manova read write math = prog, category(prog)
                           Number of obs =     200

                 W = Wilks' lambda      L = Lawley-Hotelling trace
                 P = Pillai's trace     R = Roy's largest root

      Source |  Statistic     df   F(df1,    df2) =   F   Prob>F
  -----------+--------------------------------------------------
        prog | W   0.7340      2     6.0   390.0    10.87 0.0000 e
             | P   0.2672            6.0   392.0    10.08 0.0000 a
             | L   0.3608            6.0   388.0    11.67 0.0000 a
             | R   0.3563            3.0   196.0    23.28 0.0000 u
             |--------------------------------------------------
    Residual |               197
  -----------+--------------------------------------------------
       Total |               199
  --------------------------------------------------------------
               e = exact, a = approximate, u = upper bound on F

This command produces four different test statistics that are used to evaluate the statistical significance of the relationship between the independent variable and the outcome variables.  According to all four criteria, the students in the different programs differ in their joint distribution of read, write and math.


Multivariate multiple regression

Multivariate multiple regression is used when you have two or more dependent variables that are to be predicted from two or more predictor variables.  In our example, we will predict write and read from female, math, science and social studies (socst) scores.

mvreg write read = female math science socst
Equation          Obs  Parms        RMSE    "R-sq"          F        P
----------------------------------------------------------------------
write             200      5    6.101191    0.5940   71.32457   0.0000
read              200      5    6.679383    0.5841    68.4741   0.0000

------------------------------------------------------------------------------
             |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
write        |
      female |   5.428215   .8808853     6.16   0.000      3.69093    7.165501
        math |   .2801611   .0639308     4.38   0.000     .1540766    .4062456
     science |   .2786543   .0580452     4.80   0.000     .1641773    .3931313
       socst |   .2681117    .049195     5.45   0.000     .1710892    .3651343
       _cons |   6.568924   2.819079     2.33   0.021     1.009124    12.12872
-------------+----------------------------------------------------------------
read         |
      female |   -.512606   .9643644    -0.53   0.596    -2.414529    1.389317
        math |   .3355829   .0699893     4.79   0.000     .1975497    .4736161
     science |   .2927632    .063546     4.61   0.000     .1674376    .4180889
       socst |   .3097572   .0538571     5.75   0.000     .2035401    .4159744
       _cons |   3.430005   3.086236     1.11   0.268    -2.656682    9.516691
------------------------------------------------------------------------------

Many researchers familiar with traditional multivariate analysis may not recognize the tests above.  They do not see Wilks' Lambda, Pillai's Trace or the Hotelling-Lawley Trace statistics, the statistics with which they are familiar.  It is possible to obtain these statistics using the mvtest command written by David E. Moore of the University of Cincinnati.  UCLA updated this command to work with Stata 6 and above.  You can download mvtest from within Stata by typing findit mvtest (see How can I use the findit command to search for programs and get additional help? for more information about using findit).

Now that we have downloaded it, we can use the command shown below.  

mvtest female  
                      MULTIVARIATE TESTS OF SIGNIFICANCE

 Multivariate Test Criteria and Exact F Statistics for
 the Hypothesis of no Overall "female" Effect(s)

                                              S=1    M=0    N=96

Test                          Value          F       Num DF     Den DF   Pr > F
Wilks' Lambda              0.83011470    19.8513          2   194.0000   0.0000
Pillai's Trace             0.16988530    19.8513          2   194.0000   0.0000
Hotelling-Lawley Trace     0.20465280    19.8513          2   194.0000   0.0000

These results show that female has a significant relationship with the joint distribution of write and read.  The mvtest command could then be repeated for each of the other predictor variables.
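
For example, assuming the same downloaded mvtest command, the remaining predictors could be tested one at a time, as in the sketch below.

mvtest math
mvtest science
mvtest socst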


Canonical correlation

Canonical correlation is a multivariate technique used to examine the relationship between two groups of variables.  For each set of variables, it creates latent variables and looks at the relationships among the latent variables. It assumes that all variables in the model are interval and normally distributed.  Stata requires that each of the two groups of variables be enclosed in parentheses.  There need not be an equal number of variables in the two groups.

canon (read write) (math science)

Linear combinations for canonical correlation 1        Number of obs =     200

------------------------------------------------------------------------------
             |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
u            |
        read |   .0632613    .007111     8.90   0.000     .0492386     .077284
       write |   .0492492    .007692     6.40   0.000     .0340809    .0644174
-------------+----------------------------------------------------------------
v            |
        math |   .0669827   .0080473     8.32   0.000     .0511138    .0828515
     science |   .0482406   .0076145     6.34   0.000     .0332252    .0632561
------------------------------------------------------------------------------
                                        (Std. Errors estimated conditionally)

Canonical correlations:
  0.7728  0.0235

The output above shows the linear combinations corresponding to the first canonical correlation.  At the bottom of the output are the two canonical correlations.  These results indicate that the first canonical correlation is .7728.  You will note that Stata is brief and may not provide you with all of the information that you may want.  Several programs have been developed to provide more information regarding the analysis.  You can download this family of programs by typing findit cancor (see How can I use the findit command to search for programs and get additional help? for more information about using findit).

Because the output from the cancor command is lengthy, we will use the cantest command to obtain the eigenvalues, F-tests and associated p-values that we want.  Note that you do not have to specify a model with either the cancor or the cantest commands if they are issued after the canon command.

cantest
 Canon    Can Corr   Likelihood     Approx
  Corr     Squared      Ratio            F   df1       df2    Pr > F
 .7728      .59728     0.4025      56.4706     4   392.000    0.0000
 .0235      .00055     0.9994       0.1087     1   197.000    0.7420

 Eigenvalue   Proportion  Cumulative
     1.4831       0.9996      0.9996
     0.0006       0.0004      1.0000

The F-test in this output tests the hypothesis that the first canonical correlation is equal to zero.  Clearly, F = 56.4706 is statistically significant.  However, the second canonical correlation of .0235 is not statistically significantly different from zero (F = 0.1087, p = 0.7420).


Factor analysis

Factor analysis is a form of exploratory multivariate analysis that is used to either reduce the number of variables in a model or to detect relationships among variables.  All variables involved in the factor analysis need to be continuous and are assumed to be normally distributed.  The goal of the analysis is to try to identify factors which underlie the variables.  There may be fewer factors than variables, but there may not be more factors than variables.  For our example, let's suppose that we think that there are some common factors underlying the various test scores.  We will first use the principal components method of extraction (by using the pc option) and then the principal component factors method of extraction (by using the pcf option).  This parallels the output produced by SAS and SPSS.

factor read write math science socst, pc
(obs=200)

             (principal components; 5 components retained)
Component    Eigenvalue     Difference    Proportion    Cumulative
------------------------------------------------------------------
     1        3.38082         2.82344      0.6762         0.6762
     2        0.55738         0.15059      0.1115         0.7876
     3        0.40679         0.05062      0.0814         0.8690
     4        0.35617         0.05733      0.0712         0.9402
     5        0.29884               .      0.0598         1.0000

               Eigenvectors
    Variable |      1          2          3          4          5
-------------+------------------------------------------------------
        read |   0.46642   -0.02728   -0.53127   -0.02058   -0.70642
       write |   0.44839    0.20755    0.80642    0.05575   -0.32007
        math |   0.45878   -0.26090   -0.00060   -0.78004    0.33615
     science |   0.43558   -0.61089   -0.00695    0.58948    0.29924
       socst |   0.42567    0.71758   -0.25958    0.20132    0.44269

Now let's rerun the factor analysis with a principal component factors extraction method and retain factors with eigenvalues of .5 or greater.  Then we will use a varimax rotation on the solution.

factor read write math science socst, pcf mineigen(.5)
(obs=200)

             (principal component factors; 2 factors retained)
  Factor     Eigenvalue     Difference    Proportion    Cumulative
------------------------------------------------------------------
     1        3.38082         2.82344      0.6762         0.6762
     2        0.55738         0.15059      0.1115         0.7876
     3        0.40679         0.05062      0.0814         0.8690
     4        0.35617         0.05733      0.0712         0.9402
     5        0.29884               .      0.0598         1.0000

               Factor Loadings
    Variable |      1          2    Uniqueness
-------------+--------------------------------
        read |   0.85760   -0.02037    0.26410
       write |   0.82445    0.15495    0.29627
        math |   0.84355   -0.19478    0.25048
     science |   0.80091   -0.45608    0.15054
       socst |   0.78268    0.53573    0.10041

rotate, varimax
             (varimax rotation)

               Rotated Factor Loadings
    Variable |      1          2    Uniqueness
-------------+--------------------------------
        read |   0.64808    0.56204    0.26410
       write |   0.50558    0.66942    0.29627
        math |   0.75506    0.42357    0.25048
     science |   0.89934    0.20159    0.15054
       socst |   0.21844    0.92297    0.10041

Note that by default, Stata will retain all factors with positive eigenvalues; hence the use of the mineigen option or the factors(#) option.  The factors(#) option does not specify the exact number of factors to retain, but rather the maximum number of factors to retain.  From the table of factor loadings, we can see that all five of the test scores load onto the first factor, while all five load less heavily on the second factor.  Uniqueness (which is one minus the communality) is the proportion of variance of the variable (e.g., read) that is not accounted for by all of the factors taken together, and a very high uniqueness can indicate that a variable may not belong with any of the factors.  Factor loadings are often rotated in an attempt to make them more interpretable.  Stata performs both varimax and promax rotations.
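
As a quick check of the uniqueness definition above, the value reported for read can be reproduced from its two unrotated loadings:

display 1 - (0.85760^2 + (-0.02037)^2)   // about 0.2641, the uniqueness reported for read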

rotate, varimax
(varimax rotation)

               Rotated Factor Loadings
    Variable |      1          2    Uniqueness
-------------+--------------------------------
        read |   0.62238    0.51992    0.34233
       write |   0.53933    0.54228    0.41505
        math |   0.65110    0.45408    0.36988
     science |   0.64835    0.37324    0.44033
       socst |   0.44265    0.58091    0.46660


The purpose of rotating the factors is to get the variables to load either very high or very low on each factor.  In this example, because all of the variables loaded onto factor 1 and not on factor 2, the rotation did not aid in the interpretation.  Instead, it made the results even more difficult to interpret.
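
If you would rather try an oblique rotation, a promax rotation can be requested in the same way; this is simply the alternative mentioned above, not a step from the original example.

rotate, promax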

To obtain a scree plot of the eigenvalues, you can use the greigen command.  We have included a reference line at y = 1 to aid in determining how many factors should be retained.

greigen, yline(1)
[Scree plot of the eigenvalues from greigen, with a reference line at y = 1]

