Essential Duties & Responsibilities
Create and maintain optimal data pipeline architecture.
Assemble large, complex data sets that meet functional and non-functional business requirements.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, NoSQL, and AWS technologies.
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
Work with data and analytics experts to strive for greater functionality in our data systems.
Requirements
Computer Science degree or equivalent experience.
3-5 years of experience in data engineering projects in a cross-functional environment, creating and optimizing data pipeline architecture and ensuring data quality.
Mastery of data querying for relational and/or NoSQL data stores.
Experience with multiple data formats: CSV, JSON, and Parquet.
Experience with relational (SQL) and NoSQL databases, including MySQL, Redshift, and MongoDB.
Experience with AWS cloud services: EC2, S3, and Redshift.
Preferred, But Not Required
Advanced SQL knowledge, including query authoring and experience with relational databases, as well as working familiarity with a variety of databases.
Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Strong analytic skills related to working with unstructured datasets.
Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management using Python and Airflow.
A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
Strong project management and organizational skills.
Experience supporting and working with cross-functional teams in a dynamic environment.
Experience with financial data.
www.AnchorLoans.com | Equal Opportunity Employer