How Sensed AI Used Experiential Assessment to Identify High-Potential Full Stack Engineers

Introduction
Hiring full stack engineers, especially at the internship level, often comes with a fundamental challenge. While many candidates demonstrate basic coding knowledge, it is difficult to assess how effectively they can apply that knowledge in real-world development environments. Traditional hiring methods such as resumes and coding tests rarely provide visibility into how candidates work, collaborate, and improve over time.
To address this gap, Sensed AI adopted an experiential assessment approach designed to evaluate candidates through practical work simulation. This method focused on understanding not just technical knowledge, but also execution, collaboration, and workflow behavior in a structured development environment.
How the Experiential Hiring Process Was Structured
The hiring process was designed as a multi-stage experiential evaluation, combining both individual and team-based assessments.
A total of 22 candidates entered the process, of whom 13 were fully assessed through the system. From this group, only 2 candidates were shortlisted based on performance across multiple evaluation dimensions. This reflects a highly selective process in which roughly 85 percent of fully assessed candidates were filtered out based on objective performance data.
The process followed a structured flow. Candidates first applied through the platform, after which they completed an initial assessment designed to evaluate their foundational knowledge. Those who progressed were then assigned to team-based project simulations, where they worked on real-world tasks in a collaborative environment.
This approach ensured that candidates were evaluated not just individually, but also in the context of how they function within a team.
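To make the funnel concrete, the sketch below models the stage-by-stage counts described above as a simple data structure. The stage names and types are illustrative assumptions, not part of Sensed AI's platform.

```typescript
// Hypothetical model of the assessment funnel described above.
// Candidate counts come from the article; the types are illustrative only.
type Stage = "applied" | "fullyAssessed" | "shortlisted";

interface FunnelStep {
  stage: Stage;
  candidates: number;
}

const funnel: FunnelStep[] = [
  { stage: "applied", candidates: 22 },
  { stage: "fullyAssessed", candidates: 13 },
  { stage: "shortlisted", candidates: 2 },
];

// Pass-through rate between consecutive stages.
for (let i = 1; i < funnel.length; i++) {
  const rate = (funnel[i].candidates / funnel[i - 1].candidates) * 100;
  console.log(`${funnel[i - 1].stage} -> ${funnel[i].stage}: ${rate.toFixed(0)}% advanced`);
}
```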
Assessment Approach and Evaluation Model
The assessment model was built around a two-stage evaluation framework.
The first stage consisted of a preliminary aptitude quiz that tested core frontend and programming fundamentals, including HTML, CSS, and JavaScript. This stage established a technical baseline and helped identify candidates with the required foundational knowledge.
The second stage involved a team-based project simulation, where candidates collaborated on a shared application using tools such as React, TypeScript, and GitHub. Each participant worked on assigned tasks, submitted pull requests, and went through both AI-driven and manual code reviews.
Evaluation was based on a weighted model that combined multiple dimensions. Code intelligence carried the highest weight at 45 percent, followed by project execution and engagement at 25 percent. Technical competency contributed 20 percent, while professional behavior and collaboration accounted for 10 percent.
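As a minimal sketch of how such a weighted model can be computed, the TypeScript below applies the published weights to per-dimension scores. The interface and function names are hypothetical assumptions; only the weights themselves come from the assessment.

```typescript
// Dimension scores on a 0-100 scale, weighted as reported in the assessment.
interface DimensionScores {
  codeIntelligence: number;      // weight 0.45
  projectExecution: number;      // weight 0.25 (execution and engagement)
  technicalCompetency: number;   // weight 0.20
  professionalBehavior: number;  // weight 0.10 (behavior and collaboration)
}

// Hypothetical composite: a weighted sum using the published weights.
function compositeScore(s: DimensionScores): number {
  return (
    0.45 * s.codeIntelligence +
    0.25 * s.projectExecution +
    0.20 * s.technicalCompetency +
    0.10 * s.professionalBehavior
  );
}
```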
This structure ensured that both technical ability and real-world working behavior were taken into account when evaluating candidates.
Understanding Candidate Workflows
One of the most valuable aspects of this assessment was the ability to analyze candidate workflows in detail.
Instead of focusing only on final submissions, the system tracked how candidates approached their work throughout the process. This included how frequently they committed code, how they structured their pull requests, and how they responded to feedback and review cycles.
Across the assessment, a total of 22 pull requests were reviewed and 22 tickets were completed. These metrics provided insight into participation levels and contribution patterns within the team environment.
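Sensed AI has not published the exact shape of the data it tracks, but assuming per-candidate records along the lines of the signals described above, a workflow summary might look like this sketch. All field and function names are assumptions for illustration.

```typescript
// Hypothetical per-candidate workflow record, covering the signals
// described above: commits, pull requests, tickets, and review cycles.
interface WorkflowRecord {
  candidateId: string;
  commits: number;            // how frequently code was committed
  pullRequests: number;       // PRs opened during the simulation
  ticketsCompleted: number;   // assigned tasks closed
  reviewRounds: number;       // feedback/review cycles responded to
}

// Simple participation totals across the cohort.
function totals(records: WorkflowRecord[]) {
  return records.reduce(
    (acc, r) => ({
      pullRequests: acc.pullRequests + r.pullRequests,
      ticketsCompleted: acc.ticketsCompleted + r.ticketsCompleted,
    }),
    { pullRequests: 0, ticketsCompleted: 0 }
  );
}
```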
Candidates who performed well demonstrated a consistent and iterative approach. They worked incrementally, maintained clarity in their code, and actively participated in the project. In contrast, candidates with lower performance showed limited engagement, fewer contributions, and less structured workflows.
What the Data Revealed About Candidate Performance
The aggregated performance data provided a clear picture of candidate strengths and gaps.
The average project execution score was 82, indicating that most candidates were able to complete assigned tasks when given structure and guidance. However, this strength contrasted with lower scores in the other dimensions.
The average code intelligence score was 60, and the average technical competency score was 63, suggesting that while candidates could complete tasks, the quality and depth of their implementations were moderate.
The most notable gap was observed in professional behavior, with an average score of 49. This highlighted challenges in areas such as collaboration, communication, and consistency within a team environment.
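Reusing the hypothetical compositeScore sketch from earlier, plugging in these cohort averages yields a back-of-the-envelope overall score of about 65. This derived figure is an illustration, not a number reported by Sensed AI.

```typescript
// Cohort averages reported above, run through the hypothetical composite.
const cohortAverage = compositeScore({
  codeIntelligence: 60,
  projectExecution: 82,
  technicalCompetency: 63,
  professionalBehavior: 49,
});
console.log(cohortAverage); // 0.45*60 + 0.25*82 + 0.20*63 + 0.10*49 = 65
```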
At an individual level, shortlisted candidates demonstrated significantly stronger performance across all dimensions. They showed higher code intelligence, better technical competency, and stronger engagement in project execution. They also completed more tickets and participated more actively in the review process, indicating a more disciplined and structured approach to development.
However, even among top performers, workflow-related metrics such as commit efficiency, pull request structure, and merge efficiency remained moderate. This suggests that while these candidates showed strong technical potential, their real-world development practices still had room to improve.
Key Insights for Technical Hiring
The findings from this assessment highlight several important insights for technical hiring.
Candidates often demonstrate the ability to complete tasks but may lack depth in code quality and optimization. This indicates that traditional evaluations that focus only on output may overestimate a candidate’s readiness.
Workflow behavior is a critical differentiator. Candidates who show consistent engagement, structured iteration, and clear contribution patterns tend to perform better overall.
Collaboration and professional behavior remain key gaps for many candidates. Even when technical skills are sufficient, the ability to work effectively within a team significantly impacts performance.
The combination of AI-driven evaluation and real project simulation provides a more accurate representation of candidate capabilities compared to traditional methods.
Final Thoughts
The Sensed AI experiential assessment demonstrates how hiring can be significantly improved by focusing on real-world execution rather than isolated testing.
By combining structured quizzes, team-based project simulations, and AI-assisted workflow analysis, this approach provides deeper visibility into how candidates actually perform. It enables hiring teams to move beyond assumptions and make decisions based on measurable and observable behavior.
As the demand for skilled engineers continues to grow, adopting experiential assessment models can help organizations identify candidates who are not only technically capable but also prepared to work effectively in real development environments.