Requirements:
Expertise in data engineering, including data processing and ETL.
Proficiency in SQL and NoSQL databases.
Experience with distributed processing frameworks such as Hadoop or Spark.
Experience with cloud platforms such as AWS or Azure.
Experience with Snowflake is an added advantage.
Responsibilities:
Design, build, and maintain efficient and reliable data pipelines.
Build, test, and maintain database architectures (e.g., data warehouses, data lakes).
Collaborate with data scientists and analysts to understand data requirements and translate them into technical specifications.
Implement complex ETL processes to streamline data ingestion, transformation, and storage.
Ensure the integrity, accessibility, and security of data across the organization.
Optimize data systems for performance and scalability.
Troubleshoot and resolve data-related problems.
Continuously monitor and improve data quality.