
About this role
Recruiterflow is an AI-first operating system for your search and recruiting business. We offer an integrated ATS & CRM along with a host of automation features designed to optimize your recruitment operations. We're not just building software: we're transforming how recruiting agencies operate. Our platform streamlines hiring into a structured sales process, enabling agencies to close positions faster, engage top talent, and scale with ease. We've secured a spot among the top 5 players in the industry, and our next goal is to break into the top 3 within three years.
Position Overview:
We are looking for a Data Scientist with 3+ years of experience, including hands-on work with Retrieval-Augmented Generation (RAG) frameworks. In this role, you will help drive cutting-edge AI solutions that enhance our recruitment technology. This is a unique opportunity to work on state-of-the-art AI features, creating impactful solutions that help us bridge recruitment gaps.
Why Join Recruiterflow?
At Recruiterflow, you’ll play a crucial role in shaping AI-powered innovations that will redefine the recruitment landscape. Your contributions will directly impact how recruiters operate, creating more efficient processes and helping us continue to revolutionize the industry.
Key Responsibilities:
- Design and Development of Data Pipelines: Create and maintain robust, scalable ETL/ELT pipelines for data ingestion, transformation, and loading from diverse sources into data lakes and warehouses using AWS services like S3, Glue, EMR, and Kinesis.
- Vector Database Management: Design, implement, and manage vector databases for storing and retrieving high-dimensional embeddings, critical for AI/ML applications and semantic search.
- Infrastructure and Deployment with Docker & AWS: Use Docker to containerize data processing jobs and applications, ensuring consistency across development and production environments. Deploy and orchestrate these containerized applications using AWS services such as Amazon ECR for image storage and Amazon ECS (or Kubernetes on EC2) for orchestration.
- Data Modeling and Architecture: Develop and maintain data models for both SQL (Redshift, RDS) and NoSQL (DynamoDB) databases, including schema design and performance optimization for vector databases.
- Collaboration and Support: Work closely with data scientists to transition prototypes to production systems, understand data requirements, and provide necessary infrastructure support.
- Quality Assurance and Optimization: Implement data quality checks, monitor system performance, and troubleshoot data flow issues to ensure data integrity, accuracy, and optimal performance.
Impact and Opportunities:
In this role, your contributions will shape the AI-driven recruitment solutions we offer to our clients. By developing and optimizing RAG frameworks, you'll enable recruiters to more efficiently identify and engage top candidates, transforming the recruitment experience and making a tangible impact on business outcomes.
Required Skills:
- Experience: 3 to 5 years
- Education: A Bachelor's or Master's degree in Computer Science, Engineering, Information Technology, or a related quantitative field.
- Cloud Platform Expertise: Extensive experience with AWS cloud services including S3, Lambda, EMR, Redshift, Glue, Kinesis, Athena, and IAM.
- Programming Languages: Proficiency in Python and SQL for data manipulation, scripting, and application development.
- Containerization & DevOps: Hands-on experience with Docker and familiarity with CI/CD pipelines (Jenkins, GitLab) for automated testing and deployment.
- Database Knowledge: Strong understanding of data warehousing and ETL processes, along with experience across various database types, including vector databases.
- Big Data Technologies: Experience with frameworks such as Apache Spark, Hadoop, and Kafka for large-scale data processing.
Preferred Qualifications:
- Experience with graph databases and building knowledge graphs.
- Familiarity with semantic search techniques and dense retrieval methodologies.
- Experience with prompt engineering and few-shot learning principles.
- Knowledge of distributed computing frameworks (e.g., Ray, Dask).
- Experience with agent orchestration frameworks (ReAct, Plan-and-Execute, Reflexion, multi-agent collaboration, etc.) and evaluation of agent performance.
- Exposure to MLOps practices and tools.
Join Recruiterflow and become a part of our innovative team that’s transforming the recruitment landscape with AI-powered technologies!