Successfully occupied
Rate: 7 000 € - 8 000 € / month
Form of cooperation: Full-time / 80% Remote
Sector: Banking
Location: Vienna

The reward is calculated upon delivery of 20 MD per month (1 MD = 8 h)
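As a quick sanity check, the listing's billing model implies an effective per-day and per-hour rate. A minimal sketch, using only the figures stated above (7 000 € - 8 000 € per month, 20 MD per month, 1 MD = 8 h):

```python
# Effective rates implied by the listing:
# monthly reward billed as 20 man-days (MD), 1 MD = 8 hours.
MD_PER_MONTH = 20
HOURS_PER_MD = 8

for monthly_rate in (7000, 8000):
    per_md = monthly_rate / MD_PER_MONTH   # € per man-day
    per_hour = per_md / HOURS_PER_MD       # € per hour
    print(f"{monthly_rate} €/month -> {per_md:.0f} €/MD, {per_hour:.2f} €/h")
# -> 7000 €/month -> 350 €/MD, 43.75 €/h
# -> 8000 €/month -> 400 €/MD, 50.00 €/h
```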

Project duration: 12 months with the possibility of extension
Period of cooperation: 01.04.2026 - 31.03.2027
Start date: 01.04.2026 or by agreement
Technology
  • MS Azure
  • SQL
  • SCRUM
  • Python
  • Azure Databricks
  • PySpark
Languages
  • English - active, B2/C1/C2
  • Slovak or Czech - native

Project description

  • Collaboration on migrating existing on-premise data solutions to the Azure Databricks environment and subsequent optimization of data processing in terms of performance and cost
    • Migration of existing solutions from Cloudera and Oracle on-premise to the modern cloud environment Azure Databricks
  • The main goals are:
    • Build and optimize Data Lake and subsequent datamarts
    • Ensure efficient data processing in terms of performance and cost
    • Modernize existing ETL / ELT processes to cloud-native architecture
  • Tasks and responsibilities:
    • Migration of data workloads from Cloudera and Oracle to Azure Databricks
    • Design and implementation of data pipelines in PySpark
    • Optimization of data processing performance (partitioning, caching, tuning)
    • Optimization of Databricks processing costs
    • Creation of SQL transformations and data models for datamarts
    • Collaboration with architects, analysts and platform teams
    • Documentation of solutions and migrations
    • Identification of opportunities to improve existing data processes
  • Agile development method [Scrum] supported by DevOps standards
  • Collaboration in an international team
     
  • Hybrid collaboration possible: ON-SITE in Vienna by agreement, otherwise REMOTE
  • The final remuneration depends on the candidate's experience and the course of the selection process
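The SQL-transformation and datamart work described above can be sketched in miniature. The schema and table names below are purely hypothetical, and SQLite stands in for the actual Azure Databricks SQL environment:

```python
import sqlite3

# Illustrative only: a staging table of raw transactions is transformed
# into a small datamart aggregate, mirroring the ETL/ELT step described
# in the listing. Table and column names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stg_transactions (
        account_id TEXT, booked_on TEXT, amount REAL
    );
    INSERT INTO stg_transactions VALUES
        ('A1', '2026-04-01', 120.0),
        ('A1', '2026-04-02', -30.0),
        ('B7', '2026-04-01', 55.5);

    -- Datamart layer: monthly net movement per account.
    CREATE TABLE dm_monthly_movement AS
    SELECT account_id,
           substr(booked_on, 1, 7) AS month,
           SUM(amount)             AS net_amount,
           COUNT(*)                AS tx_count
    FROM stg_transactions
    GROUP BY account_id, month;
""")
for row in conn.execute("SELECT * FROM dm_monthly_movement ORDER BY account_id"):
    print(row)
```

In a Databricks setting the same pattern would typically live in a PySpark or SQL notebook, with the staging and datamart tables as Delta tables rather than SQLite tables.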

Project requirements

  • Min. 4 years of demonstrable project experience as a Data Engineer
  • Practical experience with Azure Databricks
  • Strong knowledge of:
    • PySpark
    • Python
    • SQL
  • Experience with:
    • Migration of data solutions to the cloud
    • Development and optimization of data pipelines
  • Ability to handle performance tuning and cost optimization
  • Good communication skills and teamwork
     
  • Active knowledge of English in spoken and written form (min. B2-C1)
  • Active knowledge of Slovak or Czech language in spoken and written form (min. B2-C1)
     
  • Advantage:
    • Experience with Cloudera / Hadoop ecosystem
    • Experience with Oracle DWH solutions
    • Knowledge of modern data lakehouse architectures
    • Experience with banking or regulated environment
    • Knowledge of CI/CD or DataOps approaches
       
  • Strong analytical skills
  • Ability to react flexibly and adapt to change
  • Proactive approach
     
  • Technology stack:
    • Azure Databricks
    • PySpark
    • Python
    • SQL
    • Cloudera / Hadoop (legacy)
    • Oracle (legacy)
    • Azure Data Platform
Are you interested in this project?
Recommend an IT specialist Do you know anyone who could use this project? Recommend them and earn a 760 € reward!
Hire an IT specialist Do you need a similar IT freelancer for your project? Hire a specialist
New to the world of IT freelancing?

Freedom, flexibility, greater control over finances and career. Freelancing has evolved and offers much more today. See what's in store for you and how it will change your life.

31 537 Titans that have joined us
735 Clients that have joined us
674 065 Successfully supplied man-days