Vacancies: Senior Python Data Engineer (Databricks, Delta Lake)

Office Location: Thao Dien, District 2, Ho Chi Minh City, Vietnam

Minimum Year of Experience: From 4 years

Level: Senior

Job Type: In Office

Contract type: Full-time

Tech stack: #Python #DataEngineering #PostgreSQL

We are seeking a highly skilled Senior Data Engineer with a strong background in Databricks, Delta Lake, Elasticsearch, and relational and NoSQL databases to join our team. As a Senior Data Engineer, you will play a crucial role in designing, building, and maintaining scalable data solutions that support our organization’s data-driven decision-making. You will work closely with cross-functional teams to develop efficient data pipelines and ensure the reliability and quality of our data infrastructure.

 

Your Role & Responsibilities

  • Design, develop, and maintain data pipelines using Databricks, Delta Lake, and other relevant technologies to support data analytics and machine learning initiatives.
  • Manage and optimize data storage solutions, including the data lake and Delta Lake tables, to ensure high performance and scalability.
  • Work with SQL and NoSQL databases such as PostgreSQL and MongoDB to handle diverse data sets, both structured and unstructured.
  • Implement ETL/ELT processes and data modeling solutions to transform raw data into valuable insights.
  • Write complex SQL queries to manage, analyze, and manipulate data across various relational databases.
  • Collaborate with data scientists, analysts, and other stakeholders to gather requirements and ensure data solutions align with business needs.
  • Utilize programming languages like Python for data processing and automation of workflows.
  • Monitor and troubleshoot data pipelines to ensure data quality, consistency, and availability.
  • Leverage data pipeline orchestration tools such as Apache Airflow or Prefect to automate and schedule data workflows.
  • Continuously improve data engineering practices by adopting new tools, techniques, and best practices in the industry.
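To give a flavour of the ETL and SQL work described above, here is a minimal sketch of an extract-transform-load step. It uses Python's built-in sqlite3 module purely for illustration (in the actual role this would be PostgreSQL, Databricks, or Delta Lake); the records, table, and column names are hypothetical.

```python
import sqlite3

# Hypothetical raw records, standing in for data landed in a data lake.
raw_records = [
    {"user_id": "1", "amount": "19.99", "country": "vn"},
    {"user_id": "2", "amount": "5.00",  "country": "VN"},
    {"user_id": "1", "amount": "7.50",  "country": "vn"},
]

# Transform: cast types and normalise values before loading.
clean = [
    (int(r["user_id"]), float(r["amount"]), r["country"].upper())
    for r in raw_records
]

# Load into a relational store (sqlite3 here; PostgreSQL in practice).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (user_id INTEGER, amount REAL, country TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", clean)

# A SQL aggregation of the kind mentioned above: total spend per user.
totals = dict(conn.execute(
    "SELECT user_id, SUM(amount) FROM orders GROUP BY user_id"
).fetchall())
print(totals)
```

In production this transform-and-load step would typically be one task in an orchestrated pipeline (e.g. an Apache Airflow DAG), scheduled and monitored rather than run by hand.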

Requirements:

  • 4+ years of experience in data engineering or a related role.
  • Hands-on experience with Databricks and Delta Lake.
  • Proficient in SQL for querying and managing relational databases.
  • Experience with NoSQL databases like MongoDB or Cassandra.
  • Strong programming skills in Python for data processing.
  • Familiarity with cloud platforms and Linux.
  • Experience with data pipeline orchestration tools such as Apache Airflow, Prefect, or similar.
  • In-depth understanding of data warehousing concepts, ETL/ELT processes, and data modeling.
  • Strong problem-solving skills with great attention to detail.

Preferred Qualifications:

  • Experience with big data technologies such as Apache Spark, Hadoop, Elasticsearch, or Kafka.
  • Familiarity with CI/CD practices and tools for data engineering.

Benefits for you:

  • Highly competitive salary (negotiable)
  • 13th month salary
  • Opportunities for professional growth and development in a fast-growing startup
  • Exciting projects and cutting-edge technologies
  • Opportunity to make a significant impact in a startup ready to invest in top talent

Recruitment Process:

  • Round 1: In-person job interview
  • Round 2: Job offer and immediate start