Overview


Position: Data Engineer
Experience: 5+ Years
Location: Bangalore & Noida

As an Azure Data Engineer, your role will involve designing, developing, and maintaining data solutions on the Azure platform. You will be responsible for building and optimizing data pipelines, ensuring data quality and reliability, and implementing data processing and transformation logic.
Your expertise in Azure Databricks, Python, SQL, Azure Data Factory (ADF), PySpark, and Scala will be essential in carrying out the following key responsibilities:

* Designing and developing data pipelines: You will design and implement scalable, efficient data pipelines using Azure Databricks, PySpark, and Scala, covering data ingestion, transformation, and loading (see the sketch after this list).
* Data modeling and database design: You will design and implement data models that support efficient data storage, retrieval, and analysis. This may involve working with relational databases, data lakes, or other storage solutions on the Azure platform.
* Data integration and orchestration: You will leverage Azure Data Factory (ADF) to orchestrate data integration workflows and manage data movement across various sources and targets, including scheduling and monitoring data pipelines.
* Data quality and governance: You will implement data quality checks, validation rules, and data governance processes to ensure data accuracy, consistency, and compliance with relevant regulations and standards.
* Performance optimization: You will optimize data pipelines and queries to improve overall system performance and reduce processing time. This may involve tuning SQL queries, refining data transformation logic, and leveraging caching techniques.
* Monitoring and troubleshooting: You will monitor data pipelines, identify performance bottlenecks, and troubleshoot issues related to data ingestion, processing, and transformation, working closely with cross-functional teams to resolve data-related problems.
* Documentation and collaboration: You will document data pipelines, data flows, and transformation processes, and collaborate with data scientists, analysts, and other stakeholders to understand their data requirements and provide data engineering support.
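
To make the pipeline-building responsibility concrete, here is a minimal PySpark sketch of the kind of Databricks notebook or job this role might own. The storage paths, table name, column names, and the 95% quality threshold are illustrative assumptions, not part of this posting; Delta Lake is assumed to be available, as it is by default on Databricks.

```python
# Minimal ingest -> transform -> load sketch for Azure Databricks.
# All paths, table names, columns, and thresholds are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_pipeline").getOrCreate()

# Ingest: read raw files landed in the data lake (hypothetical ADLS Gen2 path).
raw = (
    spark.read
    .option("header", "true")
    .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/2024/")
)

# Transform: cast types and drop obviously bad rows.
orders = (
    raw
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("order_id").isNotNull() & (F.col("amount") >= 0))
)

# Simple data quality gate: fail the run if too many rows were rejected.
total, kept = raw.count(), orders.count()
if total > 0 and kept / total < 0.95:
    raise ValueError(f"Quality gate failed: only {kept}/{total} rows passed validation")

# Load: write the curated data as a partitioned Delta table
# (assumes a "curated" schema/database already exists).
(
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("curated.orders_daily")
)
```

In practice a job like this would typically be parameterized (run date, input path) and triggered as a Databricks job or an ADF pipeline activity, with the simple quality gate above standing in for the richer validation and governance rules described in the responsibilities.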

Skills and Qualifications:

* Strong experience with Azure Databricks, Python, SQL, ADF, PySpark, and Scala.
* Proficiency in designing and developing data pipelines and ETL processes.
* Solid understanding of data modeling concepts and database design principles.
* Familiarity with data integration and orchestration using Azure Data Factory.
* Knowledge of data quality management and data governance practices.
* Experience with performance tuning and optimization of data pipelines.
* Strong problem-solving and troubleshooting skills related to data engineering.
* Excellent collaboration and communication skills to work effectively in cross-functional teams.
* Understanding of cloud computing principles and experience with Azure services.
* Knowledge of big data technologies and distributed computing frameworks (e.g., Hadoop, Spark) is a plus.

