The Error Report for an automatic neural network classification model with two classes in the Output Variable looks different from the reports produced by other classification methods. On the XLMiner ribbon, from the Applying Your Model menu, select Help - Examples, then Forecasting/Data Mining Examples to open the Boston_Housing.xlsx workbook. This data set includes 14 variables pertaining to housing prices from census tracts in the Boston area, collected by the U.S. Census Bureau. The categorical variable CAT.MEDV is derived from the MEDV variable (median value of owner-occupied homes in $1,000s) by assigning a 1 when MEDV is at or above 30 (>= 30) and a 0 when MEDV is below 30 (< 30). This example also illustrates how to perform a partition on the fly.

On the XLMiner ribbon, from the Data Mining tab, select Classify - Neural Network - Automatic Network to open the Neural Network Classification (Automatic Arch.) - Step 1 of 2 dialog. Select the Data_Partition worksheet. At Output Variable, select CAT.MEDV, then move all remaining variables except MEDV to the Selected Variables list.

Neural Network Classification (Automatic Arch.) - Step 1 of 2 Dialog

Under Classes in the Output Variable, # Classes is automatically updated with a value of 2 when the Output Variable is selected, indicating that the Output Variable, CAT.MEDV, contains two classes, 0 and 1.

At Specify "Success" class (for List Chart), select a value from the drop-down arrow that will be the indicator of Success. In this example, use the default of 1.

At Specify initial cutoff probability for success, enter a value between 0 and 1. If the probability of success (the probability that the Output Variable = 1) is less than this value, the record is assigned a class value of 0; otherwise, it is assigned a 1. In this example, keep the default of 0.5.
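To see the cutoff logic outside of XLMiner, the short Python sketch below shows how a cutoff of 0.5 converts predicted probabilities of success into 0/1 class assignments. The probabilities are made up for illustration and are not XLMiner output.

```python
# Minimal sketch of cutoff-based classification.
# The probabilities below are made up for illustration; they are not
# values produced by XLMiner.
cutoff = 0.5  # initial cutoff probability for success

predicted_prob_success = [0.12, 0.48, 0.50, 0.73, 0.91]

# A record whose probability of success is below the cutoff is assigned
# class 0; otherwise it is assigned class 1.
predicted_class = [0 if p < cutoff else 1 for p in predicted_prob_success]

print(predicted_class)  # [0, 0, 1, 1, 1]
```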

Click Next to proceed to the Step 2 of 2 dialog containing options such as Error Tolerance and Weight Decay. For this example, use the default values for these options.

Neural Network Classification (Automatic Arch.) - Step 2 of 2 Dialog 

XLMiner V2015 provides the ability to partition a data set on the fly, which means that if the data set has not already been partitioned, it can be partitioned from the Step 2 of 2 dialog. Select Partition Data to enable the partition options. XLMiner performs the partitioning immediately before running the automatic neural network algorithm. For information on partitioning options, see the Standard Partition section.
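For readers who want a concrete picture of what a random partition amounts to, the Python sketch below splits the predictors and the output variable into training and validation sets. The 60%/40% split, the file path, and the use of pandas and scikit-learn are assumptions for illustration only; they do not describe how XLMiner partitions data internally.

```python
# Illustrative sketch of a random training/validation partition.
# The 60/40 split, file path, and column names are assumptions for
# illustration; XLMiner performs its own partitioning internally.
import pandas as pd
from sklearn.model_selection import train_test_split

housing = pd.read_excel("Boston_Housing.xlsx")  # assumed path

X = housing.drop(columns=["MEDV", "CAT.MEDV"])  # predictors (exclude MEDV)
y = housing["CAT.MEDV"]                         # binary output variable

X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, train_size=0.6, random_state=12345)
```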

Click Finish to produce the NNCAuto_Output worksheet, and scroll down to the Error Report.

Neural Network Classification Output:  Error Report 

For each Network ID, the Error Report provides the total number of misclassified records, along with % Error, % Sensitivity (true positive rate), and % Specificity (true negative rate), for both the Training and Validation Sets. The report may be sorted by any column by clicking the arrow next to the column heading.

Sensitivity and Specificity measures are unique to the Error Report when the Output Variable contains only two categories. Typically, these two categories can be labeled success and failure, where one is more important than the other (for example, classifying a tumor as cancerous versus benign). Sensitivity (true positive rate) measures the percentage of actual positives that are correctly identified as positive (i.e., the proportion of people with cancer who are correctly identified as having cancer). Specificity (true negative rate) measures the percentage of actual negatives that are correctly identified as negative (i.e., the proportion of people without cancer who are correctly categorized as not having cancer). The two measures are calculated as follows (their components are displayed in the Confusion Matrix below).

Sensitivity or True Positive Rate (TPR) = TP / (TP + FN)

Specificity (SPC) or True Negative Rate (TNR) = TN / (TN + FP)

If we consider 1 as a success, the Confusion Matrix appears as follows.

Confusion Matrix 
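To make the two formulas concrete, the Python sketch below computes sensitivity, specificity, and % error from confusion matrix counts. The counts are hypothetical round numbers chosen for illustration; they are not the counts behind the Error Report values discussed next.

```python
# Hypothetical confusion matrix counts (1 = success), for illustration only.
TP = 90   # actual 1, classified as 1
FN = 10   # actual 1, classified as 0
TN = 180  # actual 0, classified as 0
FP = 20   # actual 0, classified as 1

sensitivity = TP / (TP + FN)                  # true positive rate
specificity = TN / (TN + FP)                  # true negative rate
error_rate = (FP + FN) / (TP + FN + TN + FP)  # overall % error

print(f"Sensitivity: {sensitivity:.2%}")  # 90.00%
print(f"Specificity: {specificity:.2%}")  # 90.00%
print(f"% Error:     {error_rate:.2%}")   # 10.00%
```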

Net ID 10 identifies a network with one hidden layer containing 10 nodes. For this neural network, the percentage of errors is 3.95% in the Training Set and 5.45% in the Validation Set. The percent sensitivity is 87.25% for the training partition and 89.19% for the validation partition. This means that 87.25% of the actual positive records in the Training Set, and 89.19% of those in the Validation Set, were correctly classified as positive.

Sensitivity and Specificity can vary in importance depending on the application and its goals. The values for sensitivity and specificity in this network are relatively low for an application such as cancer screening, which could indicate that alternate parameters, a different architecture, or a different model might be in order. Declaring a tumor cancerous when it is benign could result in many unnecessary, expensive, and invasive tests and treatments. However, in a model where a success does not indicate a potentially fatal disease, this measure might not be viewed as important.

The percentage specificity is 87.23% for the Training Set and 95.76% for the Validation Set. This means that 87.23% of the actual negative records in the Training Set, and 95.76% of those in the Validation Set, were correctly identified as negative. In the case of a cancer diagnosis, we would prefer that this percentage be higher, or much closer to 100%, as it could be fatal if a person with cancer were diagnosed as not having cancer.