Job description
We are seeking a skilled data engineer to join our team and take responsibility for designing, building, and maintaining scalable, efficient data infrastructure and pipelines. The ideal candidate has strong knowledge of data modeling, data warehousing, and ETL (extract, transform, load) processes, along with experience working with large, complex datasets and designing robust data pipelines that integrate with a variety of data sources and systems. They should also have excellent problem-solving skills and attention to detail, and be able to work in a fast-paced environment to deliver high-quality code on time.
Responsibilities
- Design and implement scalable and efficient data pipelines that integrate with various data sources and systems.
- Develop and maintain data warehousing and data modeling solutions.
- Extract, transform, and load large and complex datasets.
- Design and implement data processing algorithms to ensure data quality and consistency.
- Collaborate with data analysts and data scientists to provide them with access to relevant data and analytics tools.
- Manage and optimize the performance of the data infrastructure and pipelines.
- Ensure data security, privacy, and compliance with relevant regulations.
- Monitor and troubleshoot issues with data pipelines and infrastructure.
- Keep up to date with the latest data engineering trends and technologies.
- Communicate with stakeholders to understand their requirements and provide solutions that meet their needs.
Requirements
- Bachelor's degree in Computer Science, Software Engineering, or a related field.
- Strong proficiency in SQL and at least one programming language such as Python, Java, or Scala.
- Experience with data modeling and with data warehousing platforms such as Snowflake, Redshift, or BigQuery.
- Experience with ETL processes and tools such as Apache Airflow, Talend, or SSIS.
- Familiarity with distributed computing and big data technologies such as Hadoop, Spark, or Kafka.
- Experience with version control systems such as Git.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively with data analysts, data scientists, and stakeholders.
- Ability to work in a fast-paced environment and deliver high-quality code on time.
- Excellent communication skills.
Preferred qualifications
- Master's degree in Computer Science, Software Engineering, or a related field.
- 5+ years of experience in data engineering.
- Experience with data engineering on cloud platforms such as AWS, Azure, or Google Cloud.
- Experience with data visualization and analytics tools such as Tableau, Power BI, or Looker.
- Familiarity with Agile development methodologies.
Apply here
Take the first step toward giving your career a massive push forward.