Role: 1 ---- Senior Data Engineer (Databricks Expertise)

Job Summary:
We are seeking a highly skilled Azure Data Engineer with strong expertise in Databricks to join our data team. The ideal candidate will design, implement, and optimize large-scale data pipelines, ensuring scalability, reliability, and performance. This role involves working closely with multiple teams and business stakeholders to deliver cutting-edge data solutions.

Key Responsibilities:

Data Pipeline Development:
- Build and maintain scalable ETL/ELT pipelines using Databricks.
- Leverage PySpark/Spark and SQL to transform and process large datasets.
- Integrate data from multiple sources, including Azure Blob Storage, ADLS, and other relational/non-relational systems.

Collaboration & Analysis:
- Work closely with multiple teams to prepare data for dashboards and BI tools.
- Collaborate with cross-functional teams to understand business requirements and deliver tailored data solutions.

Performance & Optimization:
- Optimize Databricks workloads for cost efficiency and performance.
- Monitor and troubleshoot data pipelines to ensure reliability and accuracy.

Governance & Security:
- Implement and manage data security, access controls, and governance standards using Unity Catalog.
- Ensure compliance with organizational and regulatory data policies.

Deployment:
- Leverage Databricks Asset Bundles for seamless deployment of Databricks jobs, notebooks, and configurations across environments.
- Manage version control for Databricks artifacts and collaborate with the team to maintain development best practices.

Technical Skills:
- Strong expertise in Databricks (Delta Lake, Unity Catalog, Lakehouse architecture, table triggers, Delta Live Tables, Databricks Runtime, etc.).
- Proficiency in Azure cloud services.
- Solid understanding of Spark and PySpark for big-data processing.
- Experience with relational databases.
- Knowledge of Databricks Asset Bundles and GitLab.

Preferred Experience:
- Familiarity with Databricks Runtimes and advanced configurations.
- Knowledge of streaming frameworks such as Spark Streaming.
- Experience developing real-time data solutions.

Certifications (optional):
- Azure Data Engineer Associate or Databricks Certified Data Engineer Associate.

Role: 2 ------ Databricks Engineer

Job Summary:
The responsibilities, technical skills, preferred experience, and certifications for this opening are identical to those listed for Role 1 above.

Role: 3 ------ Sr. Data Engineer (Python and Snowflake)

Responsibilities:
- Hands-on development experience with Snowflake features such as SnowSQL, Snowpipe, Python, Tasks, Streams, Time Travel, zero-copy cloning, the Optimizer, Metadata Manager, data sharing, and stored procedures.
- Experience in data warehousing: OLTP, OLAP, dimensions, facts, and data modeling.
- Working knowledge of MS Azure configuration items with respect to Snowflake.
- Develop EL pipelines in and out of the data warehouse using a combination of Databricks, Python, and SnowSQL.
- Strong understanding of Snowflake-on-Azure architecture: design, implementation, and operationalization of large-scale data and analytics solutions on Snowflake Cloud Data Warehouse.
- Develop scripts (UNIX shell, Python, etc.)
to extract, load, and transform data, as well as other utility functions.
- Provide production support for data warehouse issues such as data load problems and transformation/translation problems.
- Translate mapping specifications into data transformation designs, development strategies, and code, incorporating standards and best practices for optimal execution.
- Understand data pipelines and modern approaches to automating them using cloud-based testing, and clearly document implementations so others can easily understand the requirements, implementation, and test conditions.
- Perform code reviews to ensure fit to requirements, optimal execution patterns, and adherence to established standards.

Requirements:
- Minimum 8 years of designing and implementing operational, production-grade, large-scale data solutions on Microsoft Azure and Snowflake Data Warehouse, including hands-on experience with productionized data ingestion and processing pipelines using Python, Databricks, and SnowSQL.
- Excellent understanding of Snowflake internals and of integrating Snowflake with other data processing and reporting technologies.
- Excellent presentation and communication skills, both written and verbal; ability to problem-solve and design in an environment with unclear requirements.

Role: 4 -------- Data Engineer/Integrator

Responsibilities:
The responsibilities and requirements for this opening are identical to those listed for Role 3 above.

Role: 5 --------- Sr. Data Engineer w/ Java (Senior Software Engineer, Big Data)

What is the opportunity?

Knowledge & Experience

Must Have:
- Strong background in problem solving, OOP, data structures, and algorithms.
- 10+ years of backend application development experience in Java and Spring/Spring Boot.
- 5+ years developing and optimizing Big Data applications using Java/Scala and Spark on Cloudera/HDP.
- Experience developing and designing microservice architectures.
- Proven hands-on experience in containerization: Docker, Kubernetes, OpenShift, etc.
- Working knowledge of Jenkins CI, Git, and CI/CD pipelines.
- Ability to work with multiple stakeholders, both business and technical.

Nice to Have:
- Experience building end-to-end data pipelines on AWS/Azure and/or Databricks.
- Experience in large-scale on-premise-to-cloud migration projects.
- Programming experience in Python (2+ years).
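Several of the roles above center on upserting changed records into a warehouse table, the semantics that Delta Lake and Snowflake both expose as SQL MERGE (WHEN MATCHED THEN UPDATE, WHEN NOT MATCHED THEN INSERT). As a minimal, library-free sketch of that pattern, with hypothetical table contents and column names:

```python
# Plain-Python illustration of MERGE (upsert) semantics as provided by
# Delta Lake's MERGE INTO and Snowflake's MERGE. Rows are dicts; the
# sample data below is hypothetical.

def merge_upsert(target, updates, key):
    """Upsert rows from `updates` into `target`, matching on `key`.

    Matched rows are updated column-by-column; unmatched rows are
    inserted. Returns a new list sorted by key; inputs are not mutated.
    """
    merged = {row[key]: dict(row) for row in target}
    for row in updates:
        if row[key] in merged:
            merged[row[key]].update(row)      # WHEN MATCHED THEN UPDATE
        else:
            merged[row[key]] = dict(row)      # WHEN NOT MATCHED THEN INSERT
    return sorted(merged.values(), key=lambda r: r[key])

customers = [
    {"id": 1, "name": "Ada", "city": "London"},
    {"id": 2, "name": "Grace", "city": "Boston"},
]
changes = [
    {"id": 2, "city": "New York"},               # updates an existing row
    {"id": 3, "name": "Linus", "city": "Oslo"},  # inserts a new row
]

result = merge_upsert(customers, changes, key="id")
```

In Spark or Snowflake the same logic runs set-wise on the engine rather than row-by-row in Python; this sketch only shows the matched/unmatched semantics the roles refer to.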
Job Title: Senior Data Engineer
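Roles 1 and 2 call for deploying jobs across environments with Databricks Asset Bundles. A minimal databricks.yml sketch is below; the bundle name, job, notebook path, and workspace hosts are all hypothetical placeholders:

```yaml
# databricks.yml — minimal Databricks Asset Bundle sketch.
# All names and workspace hosts here are hypothetical.
bundle:
  name: nightly_etl_bundle

targets:
  dev:
    mode: development
    default: true
    workspace:
      host: https://adb-1111111111111111.1.azuredatabricks.net
  prod:
    mode: production
    workspace:
      host: https://adb-2222222222222222.2.azuredatabricks.net

resources:
  jobs:
    nightly_etl:
      name: nightly-etl
      tasks:
        - task_key: transform
          notebook_task:
            notebook_path: ./notebooks/transform.ipynb
```

With a file like this at the project root, `databricks bundle deploy -t dev` (or `-t prod`) deploys the job and notebook to the chosen target workspace, which is the cross-environment deployment workflow the Deployment section describes.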