Default evaluation criteria
Candidates can be evaluated on several parameters, each scored on a 5-point scale.
The parameters are as follows:
1. Communication skills (ability to understand the question and explain the solution)
- Poor
- Could not explain the solution properly
- Good
- Explained the solution properly
- Excellent
2. Ability to debug the code
- Could not debug the code
- Could only debug a few errors
- Could only debug the standard errors
- Could debug most of the errors
- Wrote perfect code
3. Quality of code
- Poor
- Unformatted code
- Basic concepts are covered
- Reusable and readable
- Perfect code
4. Solution and execution
- Unoptimized code that did not solve the question properly
- Partly optimized code that partially solved the question
- Reasonably optimized code that solved the question, covering the basic concepts
- Well-optimized code that solved the whole question
- Perfectly optimized code whose solution prints the required output
Scores can be added in real time during the interview and are also available later as part of the automated interview summary.
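If your team records or exports these ratings outside FaceCode, the rubric can be represented as simple structured data. The sketch below is illustrative only: it assumes a 1-to-5 score for each parameter and uses a plain average as a stand-in for the automated summary; it is not FaceCode's documented aggregation logic.

```python
# Illustrative sketch only: the four default parameters from this article,
# scored 1-5 (assumed range), with a plain average standing in for the
# automated summary. This is not FaceCode's documented aggregation logic.

PARAMETERS = [
    "Communication skills",
    "Ability to debug the code",
    "Quality of code",
    "Solution and execution",
]


def summarize(scores: dict) -> float:
    """Validate that every parameter has a 1-5 score and return the average."""
    for name in PARAMETERS:
        score = scores[name]
        if not 1 <= score <= 5:
            raise ValueError(f"{name!r} must be scored 1-5, got {score}")
    return sum(scores[name] for name in PARAMETERS) / len(PARAMETERS)


# Example: one interviewer's ratings for a candidate
ratings = {
    "Communication skills": 4,
    "Ability to debug the code": 3,
    "Quality of code": 4,
    "Solution and execution": 5,
}
print(summarize(ratings))  # 4.0
```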