EVERSAFE ACADEMY PTE. LTD.
Job Title: Data & Analytics Engineer
Department: Technology & Digital Transformation
Reports to: Chief Technology Officer (CTO)
Job Grade: Senior Technical Specialist
Position Summary
The Data & Analytics Engineer will architect and maintain the high-integrity data foundation required for our digital transformation under the IMDA Digital Leaders Programme (DLP). Reporting to the CTO, you will be responsible for consolidating disparate legacy systems into a unified data layer that powers both autonomous AI agents and executive-level analytics. You will bridge the gap between raw data collection and actionable insights by building robust pipelines and intuitive, real-time dashboards.
Key Responsibilities
Unified Data Architecture: Design and implement a scalable data lakehouse/warehouse to consolidate data from 13+ disparate systems into a single "source of truth."
ETL/ELT Pipeline Development: Build automated pipelines to ingest, clean, and standardise complex data formats (structured and unstructured) for real-time AI consumption and reporting.
Data Modelling: Create scalable data schemas that standardise inconsistent data formats from multiple business units into a unified structure.
AI-Ready Infrastructure: Optimise the data layer for high-speed retrieval, ensuring the AI team has the structured and unstructured data needed for RAG and real-time monitoring.
Analytics Dashboards: Collaborate with business divisions to implement and maintain interactive, executive-level data dashboards that visualise critical KPIs.
Data Quality & Governance: Implement automated validation checks to ensure "100% data integrity," specifically focusing on compliance-heavy environments where accuracy is critical.
Performance Optimisation: Monitor and tune database performance to support both high-frequency AI agent requests and complex analytical reporting.
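As a concrete illustration of the automated validation checks described under Data Quality & Governance, the sketch below shows batch-level integrity checks in Python. The record fields (`student_id`, `email`, `fee_paid`) and thresholds are hypothetical, chosen only to illustrate the pattern; a production pipeline would typically express such checks in a framework like dbt tests or Great Expectations.

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    name: str
    passed: bool
    failures: int

def validate_records(records):
    """Run simple integrity checks on a batch of ingested records.

    Each record is a dict; the field names used here are illustrative only.
    """
    results = []

    # Check 1: required fields must be present and non-null.
    missing = sum(
        1 for r in records
        if r.get("student_id") is None or r.get("email") is None
    )
    results.append(CheckResult("required_fields", missing == 0, missing))

    # Check 2: the primary key must be unique across the batch.
    ids = [r["student_id"] for r in records if r.get("student_id") is not None]
    dupes = len(ids) - len(set(ids))
    results.append(CheckResult("unique_student_id", dupes == 0, dupes))

    # Check 3: numeric fields must fall in a plausible range.
    bad_fees = sum(
        1 for r in records
        if not isinstance(r.get("fee_paid"), (int, float)) or r["fee_paid"] < 0
    )
    results.append(CheckResult("fee_paid_non_negative", bad_fees == 0, bad_fees))
    return results

batch = [
    {"student_id": 1, "email": "a@x.sg", "fee_paid": 500},
    {"student_id": 1, "email": "b@x.sg", "fee_paid": -20},  # duplicate id, bad fee
    {"student_id": 2, "email": None, "fee_paid": 300},      # missing email
]
report = validate_records(batch)
for check in report:
    print(f"{check.name}: {'PASS' if check.passed else 'FAIL'} ({check.failures} failures)")
```

Failed checks would normally quarantine the offending rows and alert the on-call engineer rather than silently loading them into the warehouse.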
Qualifications & Experience
Education: Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field.
Experience: Minimum of 3 years of experience in data engineering, with a proven track record of managing complex data integration projects.
Core Technical Stack
* Languages: Expert-level SQL and experience in PySpark.
* Data Lake: Cloud object storage (AWS S3 or ADLS Gen2) and open table formats such as Apache Iceberg.
* Data Warehousing: Proficiency in AWS Redshift, Snowflake, or BigQuery.
* Orchestration: Experience with dbt, Apache Airflow, or AWS Glue.
* Databases: Hands-on experience with both relational (PostgreSQL, MSSQL) and NoSQL/vector databases (MongoDB, Redis, Pinecone).
* Analytics: Proficiency in BI tools such as AWS Quicksight, Tableau, or Power BI.
Data Governance: Strong understanding of data privacy (PDPA), security, and master data management (MDM) principles.
AI Data Lifecycle: Experience in unstructured data preprocessing for AI, including document parsing, metadata enrichment, and "chunking" strategies to optimise data for vector databases and RAG pipelines.
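To illustrate the chunking and metadata-enrichment work mentioned above, here is a minimal sketch of a character-based sliding-window chunker. The `chunk_size` and `overlap` values, and the `prospectus.pdf` source name, are hypothetical; real pipelines usually split on sentence or token boundaries and tune sizes per embedding model.

```python
def chunk_text(text, chunk_size=400, overlap=50):
    """Split a document into overlapping chunks for vector-database ingestion.

    chunk_size and overlap are character counts chosen for illustration.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        piece = text[start:start + chunk_size]
        if piece:
            chunks.append({
                "text": piece,
                # Metadata enrichment: carry provenance with each chunk so
                # retrieved passages can be traced back to their source.
                "start_char": start,
                "source": "prospectus.pdf",  # hypothetical document name
            })
        if start + chunk_size >= len(text):
            break
    return chunks

doc = "Eversafe Academy course catalogue. " * 40  # ~1,360 characters
chunks = chunk_text(doc)
print(len(chunks), "chunks; first chunk length:", len(chunks[0]["text"]))
```

The overlap ensures that a sentence falling on a chunk boundary still appears intact in at least one chunk, which tends to improve retrieval quality in RAG pipelines.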
Mindset: A detail-oriented engineer who understands that high-quality AI and business decisions are only possible with high-quality, well-structured data.