Checker files

Purpose

Every Data Science and Machine Learning question is conceptually different; therefore, each question requires a unique evaluation metric. This metric is implemented in a checker file, which is used to evaluate a candidate's submission. A checker file contains the following two functions:

  1. gen_score: Generates the score after evaluating the candidate's submission.
  2. verify_submission: Verifies the file submitted by the candidate by performing checks such as the file type, the number of rows and columns present, and whether predictions exist for all IDs provided in result.format.

Programming languages supported

HackerEarth uses Python 3 to create checker files.

Structure of a checker file

Importing libraries

You must import all the required libraries at the top of the checker file.

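A minimal sketch of these imports, assuming a pandas-based checker that scores with scikit-learn; the exact list depends on your evaluation metric:

    import json

    import pandas as pd
    from sklearn.metrics import mean_squared_error  # or whichever metric your question uses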

Loading the result and submission file path

The parts of this code are described as follows:

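A minimal sketch of this step. The JSON file name and key names below are assumptions (the actual values are supplied by the platform), and the numbered comments correspond to the descriptions that follow:

    data = json.load(open('input.json'))   # 1: platform-provided JSON (file name assumed)
    testcase = data['testcase']            # 2: path of the test case file (key name assumed)
    user_submission = data['submission']   # 3: path of the candidate's submission (key name assumed)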

  1. A default JSON file from HackerEarth's platform is loaded into the data variable. This file contains the paths of the test case and the user submission.
  2. The testcase variable is assigned the path for the test case file.
  3. The user_submission variable is assigned the path of the candidate’s submission file.

The gen_score() function

This function takes two arguments: actual and predicted. The actual parameter represents the truth values of the target column(s) in the test case, and the predicted parameter represents the predictions of the target column(s) in a candidate's submission file. Here, the score variable holds the value computed by the evaluation-metric formula.

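A minimal sketch of the function's skeleton; the numbered comments correspond to the labels described below:

    def gen_score(actual, predicted):   # 1
        score = ...                     # 2: replace with the evaluation-metric formula
        return score                    # 3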

Here, the descriptions of the labels are as follows:

  1. The gen_score function is defined with the two parameters mentioned above.
  2. The score is calculated based on an evaluation metric and is assigned to the score variable.
  3. The function returns the value of the score after the required calculation in the previous step.

Example 

The evaluation metrics are mostly imported from the scikit-learn library. An example is as follows:

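A sketch of such a gen_score, assuming the metric is mean_squared_error; it mirrors the description below:

    from sklearn.metrics import mean_squared_error

    def gen_score(actual, predicted):
        # Lower error is better, so subtract the error from 100 and clamp at 0
        score = max(0, 100 - mean_squared_error(actual, predicted))
        return score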

Because mean_squared_error is an error metric, the lower the error, the better the model.

We always assume the highest score to be 100, so we subtract the error from 100. The max function ensures that the score cannot fall below 0, keeping it in the range of 0 to 100.

The verify_submission() function

1. The ID variable represents the name of the index or ID column. You are required to enter the name of the index or ID column in place of None.

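The line looks like this before you edit it:

    ID = None  # replace None with the name of the index or ID column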

Example

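For example, if the ID column in result.format is named 'id' (a hypothetical column name):

    ID = 'id'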

2. The following line of code checks the type of the file submitted by the candidate, that is, whether it is in .csv format:

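A sketch of the check, assuming the checker inspects the extension of the submission path:

    if not user_submission.endswith('.csv'):
        raise Exception('Submission must be a .csv file')  # message text assumed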

If the file is not in .csv format, the checker raises an exception and displays an error message to the candidate.


3. The fp_submission variable (labeled as 1) reads the candidate's submission and the fp_testcase variable (labeled as 2) reads the test cases of the question.

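A sketch, assuming both files are CSVs read with pandas; the numbered comments correspond to the labels:

    fp_submission = pd.read_csv(user_submission)  # 1: candidate's submission
    fp_testcase = pd.read_csv(testcase)           # 2: test cases of the question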

4. The following line of code assigns the number of rows of the result file to the num_rows variable. 

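A sketch of this assignment:

    num_rows = fp_testcase.shape[0]  # number of rows in the result file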

The following line of code checks whether the number of rows of the submission file is equal to the number of rows of the test case:

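A sketch of the check:

    if fp_submission.shape[0] != num_rows:
        raise Exception('Submission must contain exactly %d rows' % num_rows)  # message text assumed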

If the number of rows is not equal, the program raises an exception and displays an error message to the candidate.


5. The candidate's submission must contain the following:
  • All the columns that are provided in the sample submission file.
  • All column names must be written in the same way as mentioned in the sample submission file. 

To check that every column of the result file is also present in the submission file (that is, that the set difference between the two column sets is empty), the following line of code is used:

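A sketch of the check; the difference is taken so that any required column missing from the submission triggers the error:

    if len(set(fp_testcase.columns) - set(fp_submission.columns)) != 0:
        raise Exception('Submission is missing one or more required columns')  # message text assumed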

If the submission file does not contain a column that is provided in the test case, the checker raises an exception and an error message is displayed.


6. The following two lines of code set the index in the result file (labeled as 1) and candidate’s submission (labeled as 2):

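A sketch; the numbered comments correspond to the labels:

    fp_testcase = fp_testcase.set_index(ID)      # 1: result file
    fp_submission = fp_submission.set_index(ID)  # 2: candidate's submission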

7. The label_cols variable extracts the names (labels) of the columns that are available in the result file and the candidate's submission. 

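A sketch, taking the target column names from the result file:

    label_cols = list(fp_testcase.columns)  # target column name(s) to score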

8. You are required to determine all unique IDs that are available in both the result file and the candidate's submission. All the common index values, or unique IDs, are stored in a single variable.

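A sketch, using a hypothetical variable name common_ids:

    common_ids = fp_testcase.index.intersection(fp_submission.index)  # IDs present in both files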

9. A candidate can miss certain index values or unique IDs. You are required to check for the unique IDs that are available in the test cases but are not available in the candidate's submission. This is performed by using the following line of code:
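A sketch:

    not_in_test = fp_testcase.index.difference(fp_submission.index)  # IDs missing from the submission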

Here, the not_in_test variable stores the indexes that are missing from the candidate's submission file. If the length of not_in_test is equal to 0, that is, there are no missing index values, the submission is accepted.

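A sketch that combines the check, the key_not_found variable, and the error message described below (the exact structure and message text are assumptions):

    if len(not_in_test) != 0:
        # There are missing index values: collect them and report an error
        key_not_found = ', '.join(str(k) for k in not_in_test)
        raise Exception('Prediction missing for IDs: ' + key_not_found)  # message text assumed
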
If there are missing index values, these values are stored in the key_not_found variable.


All the missing index values are then displayed in the error message.


10. The following two lines of code allow you to select the predictions across the index values that are common between the result file (labeled as 1) and the candidate’s submission file (labeled as 2).

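A sketch, reusing the hypothetical common_ids from step 8; the numbered comments correspond to the labels:

    fp_testcase = fp_testcase.loc[common_ids]      # 1: result file rows for the common IDs
    fp_submission = fp_submission.loc[common_ids]  # 2: submission rows for the common IDs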

11. The required predictions for all the unique IDs in the result file and the candidate's submission are initialized in the actual_values (labeled as 1) and predicted_values (labeled as 2) variables, respectively.

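A sketch; the numbered comments correspond to the labels:

    actual_values = fp_testcase[label_cols].values       # 1: ground-truth values
    predicted_values = fp_submission[label_cols].values  # 2: candidate's predictions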

Then, the actual values and predicted values are passed to the gen_score function. This is shown in the following line of code:

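A sketch of the call:

    score = gen_score(actual_values, predicted_values)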

12. The gen_score function returns the score, which is printed to six decimal places.

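A sketch:

    print('%.6f' % score)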

13. The program begins with the verify_submission function (labeled as 1). If an exception is raised based on the checkpoints provided in the verify_submission function, it is displayed to the candidate (labeled as 2). This is performed by using the following lines of code:

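A sketch of the entry point; the numbered comments correspond to the labels:

    try:
        verify_submission()   # 1: run all the checkpoints and scoring
    except Exception as e:
        print(str(e))         # 2: display the exception message to the candidate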

Your tasks in the checker file

Once you have understood the structure of a checker file, you can easily create your own with reference to the sample checker file. Some important points to remember are as follows:

  • Enter the correct formula to generate the candidates' scores.
  • Enter the correct name of the ID column in the ID variable of the verify_submission function.