Monday, February 10, 2020

Credit Risk Analysis - Application of Logistic Regression Essay

1. The measurement scales of the variables are not set correctly: most of the variables are left as "nominal". Out of the 20 independent variables, 7 should be treated as "scale" variables, 4 as "ordinal", and the remaining variables as "nominal".

2. In applying binary logistic regression, the "Forward LR" method is used to run the data, because this method enters variables into the analysis one at a time and, at the last step, presents the most statistically significant variables, which are the ones useful for the analysis.

3. The Hosmer and Lemeshow test is selected to examine the agreement between the observed values and the expected values.

With the help of SPSS, the following tables are generated. Since the "Forward LR" method is used and it takes 11 steps, the values of the previous 10 steps have been omitted from the tables in order to keep the report concise; only the values pertaining to step 11 are used in the analysis. The tables and their interpretation are presented below, and illustrative Python sketches of the underlying computations are given after the tables.

Classification Table (a, b)

                                        Predicted
                                  CreditRisk             Percentage
  Observed                        Bad        Good        Correct
  Step 0  CreditRisk    Bad         0         300           .0
                        Good        0         700         100.0
          Overall Percentage                                70.0

  a. Constant is included in the model.
  b. The cut value is .500

The 2 x 2 table presented above tallies the correct and incorrect predictions of the constant-only (null) model. Rows represent the actual (observed) values of the dependent variable, whereas columns represent the predicted values. In a perfect model, all cases would fall on the diagonal and the overall percentage correct would be 100%. If the logistic model were homoscedastic, the percentage correct would be approximately the same for both rows. That is not the case here: the model predicts all of the "Good" cases but none of the "Bad" cases. The overall percentage correct is 70%, which is moderately good. The researcher should note that simply assigning the most frequent category ("Good") to every case would produce the same 70% correct.

Variables in the Equation

                          B       S.E.      Wald      df     Sig.    Exp(B)
  Step 0   Constant     .847     .069     150.762      1     .000     2.333

In the SPSS results above, the coefficients of all the independent variables are still 0, because only the constant has been entered at Step 0. The significance of the constant (Sig. = .000) indicates that the null hypothesis should be rejected.

Omnibus Tests of Model Coefficients

                        Chi-square     df      Sig.
  Step 11   Step            5.276       1      .022
            Block         300.781      31      .000
            Model         300.781      31      .000

The purpose of this chi-square goodness-of-fit test is to investigate whether the step that has been taken (here, from the constant-only model to the model with the independent variables) is justified. Adding a variable or variables is justified when the significance value is less than 0.05; if the step were instead to exclude variables from the model equation, it would be justified by a cutoff greater than 0.10. Since the Sig. values are less than 0.05, the null hypothesis can be rejected and the model is statistically significant.

Model Summary

  Step      -2 Log likelihood     Cox & Snell R Square     Nagelkerke R Square
   11            920.948(a)               .260                    .368

  a. Estimation terminated at iteration number 5 because parameter estimates changed by less than .001.
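The analysis above is run in SPSS, but the forward likelihood-ratio idea from point 2 can be illustrated outside of it. The sketch below is a minimal Python approximation, assuming a pandas DataFrame df with a 0/1 target column and already-encoded numeric predictor columns; the names used are hypothetical, not taken from the original data. Note that SPSS's Forward: LR method screens variables for entry with a score statistic and applies the likelihood-ratio test for removal, whereas this simplification uses a likelihood-ratio test for entry only.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

def forward_lr(df, target, candidates, alpha_enter=0.05):
    """Enter predictors one at a time, at each round adding the candidate
    whose likelihood-ratio test against the current model is most
    significant, until no remaining candidate passes the entry threshold."""
    y = df[target]
    selected = []
    # Log-likelihood of the constant-only (Step 0) model.
    current_ll = sm.Logit(y, np.ones((len(df), 1))).fit(disp=0).llf
    while True:
        best = None
        for var in candidates:
            if var in selected:
                continue
            X = sm.add_constant(df[selected + [var]])
            ll = sm.Logit(y, X).fit(disp=0).llf
            lr_stat = 2.0 * (ll - current_ll)          # LR chi-square with 1 df
            p_value = stats.chi2.sf(lr_stat, df=1)
            if best is None or p_value < best[1]:
                best = (var, p_value, ll)
        if best is None or best[1] >= alpha_enter:
            break                                      # no candidate improves the model enough
        selected.append(best[0])
        current_ll = best[2]
    return selected

# Hypothetical usage: candidates would be the 20 independent variables
# after the nominal and ordinal ones have been numerically encoded.
# kept = forward_lr(df, "CreditRisk", candidate_columns)
```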
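The Hosmer and Lemeshow test from point 3 compares observed and expected event counts across groups (usually deciles) of predicted probability. A possible implementation is sketched below; y_true and p_hat are assumed to hold the observed 0/1 outcomes and the model's fitted probabilities.

```python
import numpy as np
import pandas as pd
from scipy import stats

def hosmer_lemeshow(y_true, p_hat, groups=10):
    """Chi-square comparison of observed vs expected events per probability
    group; returns the statistic, its degrees of freedom and the p-value."""
    data = pd.DataFrame({"y": np.asarray(y_true), "p": np.asarray(p_hat)})
    data["group"] = pd.qcut(data["p"], q=groups, duplicates="drop")
    grouped = data.groupby("group", observed=True)
    n = grouped["y"].count()        # cases per group
    o_event = grouped["y"].sum()    # observed events per group
    e_event = grouped["p"].sum()    # expected events per group
    hl_stat = (((o_event - e_event) ** 2 / e_event)
               + (((n - o_event) - (n - e_event)) ** 2 / (n - e_event))).sum()
    dof = len(n) - 2
    return hl_stat, dof, stats.chi2.sf(hl_stat, dof)
```

A non-significant p-value (greater than 0.05) indicates that the predicted probabilities are consistent with the observed outcomes.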
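The classification table is produced by dichotomising the fitted probabilities at the cut value of .500 noted beneath the table. A minimal sketch, assuming the same y_true and p_hat as above:

```python
import numpy as np
import pandas as pd

def classification_table(y_true, p_hat, cut=0.5):
    """Cross-tabulate observed vs predicted class at the given cut value
    and report the overall percentage of correct classifications."""
    y_true = np.asarray(y_true)
    predicted = (np.asarray(p_hat) >= cut).astype(int)   # 1 = "Good", 0 = "Bad"
    table = pd.crosstab(pd.Series(y_true, name="Observed"),
                        pd.Series(predicted, name="Predicted"))
    overall_pct = 100.0 * (predicted == y_true).mean()
    return table, overall_pct
```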
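Finally, the figures in the Omnibus test and Model Summary tables are all functions of the model log-likelihoods. The sketch below shows how the model chi-square, the -2 Log likelihood, and the Cox & Snell and Nagelkerke R-square values could be recomputed; ll_null and ll_model are assumed to be the log-likelihoods of the constant-only and the fitted models, n the sample size and df_model the number of parameters tested.

```python
import numpy as np
from scipy import stats

def model_summary(ll_null, ll_model, n, df_model):
    """Recompute the Omnibus model chi-square, the -2 Log likelihood and
    the Cox & Snell / Nagelkerke R-square values from the log-likelihoods."""
    model_chi2 = 2.0 * (ll_model - ll_null)            # "Model" row of the Omnibus test
    model_p = stats.chi2.sf(model_chi2, df_model)
    minus_2ll = -2.0 * ll_model                        # "-2 Log likelihood"
    cox_snell = 1.0 - np.exp((2.0 / n) * (ll_null - ll_model))
    nagelkerke = cox_snell / (1.0 - np.exp((2.0 / n) * ll_null))
    return model_chi2, model_p, minus_2ll, cox_snell, nagelkerke
```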
