Compensation is based on delivery of 20 MD (man-days) per month (1 MD = 8 hours)
Design and development of robust data pipelines on the Databricks platform
Optimization of data transformations in Spark and SQL environments
Management and integration of data sources from Azure (Data Factory, Cosmos DB, Event Store, etc.)
Creation and management of data structures using Delta Lake
Implementation of security and access policies in cloud services
Testing and verification of data solutions according to internal standards
Collaboration with teams across the company in the design of data models and reports
Code versioning, deployment of changes via CI/CD pipelines, and work with Docker containers
Minimum 6 years of experience in data engineering
Excellent knowledge of Python and software engineering principles
Proven experience with Spark and SQL in processing large datasets
Extensive experience with the Databricks environment, including the latest features
Knowledge of Delta Lake and data warehousing concepts
Practical experience with cloud technologies, especially Azure
Familiarity with Azure tools such as Data Factory, Cosmos DB, Event Store, ADLS, and Key Vault
Strong command of both relational and NoSQL databases and a clear sense of when each is appropriate
Experience with the "Data as Code" concept and versioning tools (Git), CI/CD, Docker (must-have), Kubernetes (advantage)
Knowledge of reporting tools like Power BI is an advantage
Experience with ML/AI is an advantage
Excellent communication, presentation, and teamwork skills
Freedom, flexibility, and greater control over your finances and career. Freelancing has evolved and offers much more today. See what awaits you and how it can change your life.