Data Engineer – AWS Redshift

Fusemachines Nepal Pvt. Ltd., Kathmandu, Nepal — Posted 11 months ago

We are actively seeking a highly skilled and experienced AWS Data Engineer to join our team in a remote, full-time consulting role on a contract basis, with working hours starting at 2 PM EST. In this key position, you will design, build, and maintain the infrastructure and code needed for efficient data integration, storage, and processing. The role supports a media company dedicated to understanding consumer preferences, attitudes, and behaviors by collecting data through surveys and passive measurement. Your responsibilities include applying data science methodologies and enterprise-grade software to derive insights from consumer data.

Responsibilities:

  • Follow established designs and construct data architectures, developing and maintaining data pipelines to ensure smooth data flow from source to destination.
  • Handle ELT processes, including data extraction, loading, and transformation.
  • Contribute to data quality assurance efforts by implementing data validation checks and tests to ensure accuracy, completeness, and consistency of data (see the sketch following this list).
  • Test software solutions to meet product quality standards before release to QA.
  • Ensure the reliability, scalability, and efficiency of data systems, identifying and resolving performance bottlenecks caused by data volumes, queries, and processing workflows so that data is delivered efficiently and on time.
  • Work with DevOps teams to optimize resources, such as storage.
  • Provide guidance and mentorship to junior data engineers, fostering their professional growth.
  • Assist in the configuration and management of data warehousing and data lake solutions.
  • Collaborate closely with cross-functional teams, including Product, Engineering, Data Scientists, and Analysts, to thoroughly understand data requirements and provide data engineering support.
  • Take ownership of the storage layer, managing tasks like schema design, indexing, and performance tuning.
  • Evaluate and implement cutting-edge technologies, continuing to learn and expand skills in data engineering and cloud platforms.
  • Working closely with the Data Architect, design, develop, and execute data governance strategies encompassing cataloging, lineage tracking, and quality control, aligned with current analytics demands and industry best practices.
  • Ensure technology solutions support the needs of the customer and/or organization.
  • Define and document data engineering processes and data flows.
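To illustrate the kind of data validation check described in the data quality item above, here is a minimal, self-contained Python sketch of batch completeness and consistency checks; the record shape, field names, and check rules are hypothetical assumptions rather than the client's actual schema.

```python
from typing import Iterable

# Hypothetical required fields for a survey-response record; the real schema
# depends on the client's data model.
REQUIRED_FIELDS = ("respondent_id", "survey_id", "responded_at")


def validate_records(records: Iterable[dict]) -> dict:
    """Run basic completeness and consistency checks over a batch of records."""
    seen_keys = set()
    results = {"missing_fields": 0, "duplicates": 0, "total_checked": 0}

    for record in records:
        results["total_checked"] += 1
        # Completeness: every required field must be present and non-empty.
        if any(not record.get(field) for field in REQUIRED_FIELDS):
            results["missing_fields"] += 1
            continue
        # Consistency: the same respondent should not answer the same survey twice.
        key = (record["respondent_id"], record["survey_id"])
        if key in seen_keys:
            results["duplicates"] += 1
        seen_keys.add(key)

    return results


if __name__ == "__main__":
    sample = [
        {"respondent_id": "r1", "survey_id": "s1", "responded_at": "2024-01-01"},
        {"respondent_id": "r1", "survey_id": "s1", "responded_at": "2024-01-02"},
        {"respondent_id": "r2", "survey_id": "s1", "responded_at": ""},
    ]
    print(validate_records(sample))
    # {'missing_fields': 1, 'duplicates': 1, 'total_checked': 3}
```

In practice such checks would typically run inside the pipeline itself, for example as a dbt test or a step in an orchestrated job, rather than as a standalone script.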

Qualification / Skill Set Requirements:

  • At least 3 years of hands-on data engineering experience with AWS (certifications are a plus), and at least 5 years in engineering roles overall.
  • Proficiency in one or more programming languages such as Python or C#, with a demonstrated ability to write efficient, optimized code for data integration, storage, processing, and manipulation.
  • Strong understanding and experience with SQL, including writing advanced queries and optimizing them for performance.
  • Proficient in SDLC tools and technologies, encompassing project management software (Jira), source code management (GitHub), CI/CD systems (GitHub Actions, AWS CodeBuild, or similar), and binary repository managers (AWS CodeArtifact or similar).
  • Sound understanding of Data Modeling and Database Design Principles, with the ability to design and implement efficient database schemas based on requirements.
  • Skill in Data Integration from various sources, including APIs, SQL/NoSQL databases, flat files, and event streaming in different formats.
  • Strong experience with ELT and ETL tools, and the ability to develop custom integration solutions as needed, including designing and implementing Data Warehousing solutions.
  • Proficiency in scalable and distributed Data Technologies such as Spark/PySpark, DBT, and Kafka (a brief PySpark sketch follows this list).
  • Expertise in Cloud Computing with AWS and Azure, including familiarity with AWS services such as Redshift, Lambda, Kinesis, S3, Lake Formation, EC2, ECS/ECR, IAM, and CloudWatch, as well as Azure services such as Cosmos DB.
  • Understanding of Data Quality and Governance, involving the implementation of data quality checks and data monitoring processes.
  • Excellent problem-solving skills and attention to detail.
  • Experience with relational (SQL) and NoSQL databases, including Postgres, Cosmos DB, MongoDB, and Elasticsearch.
  • Experience with BI and data visualization tools to enable data-driven decision-making within the organization.
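
As a small illustration of the Spark/PySpark skills listed above, the following sketch shows a lightweight cleanup-and-publish step over survey data in a data lake; the S3 paths, column names, and application name are hypothetical assumptions, not details of the client's environment.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("survey_elt_sketch").getOrCreate()

# Load raw survey responses from the landing zone (path is hypothetical).
raw = spark.read.json("s3://example-bucket/raw/survey_responses/")

# Light transformation: normalize timestamps, drop incomplete and duplicate records.
cleaned = (
    raw
    .withColumn("responded_at", F.to_timestamp("responded_at"))
    .dropna(subset=["respondent_id", "survey_id", "responded_at"])
    .dropDuplicates(["respondent_id", "survey_id"])
)

# Publish partitioned Parquet to the curated zone for downstream loading
# (for example via Redshift Spectrum or a COPY job).
(
    cleaned.write
    .mode("overwrite")
    .partitionBy("survey_id")
    .parquet("s3://example-bucket/curated/survey_responses/")
)
```

Writing curated Parquet back to S3 keeps the lake as the source of truth and lets Redshift consume the data either externally or through batch loads.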
Details
Industry: Information Technology (IT)
Job Type: Full-time
Job Level: Mid-Level