Successfully occupied
Rate 7 000 € - 8 400 € / month
Form of cooperation Full-time
Sector Insurance
Location 100% Remote

The reward is calculated upon delivery of 20 MD per month (1 MD = 8 h)

Project duration 6-12 months with the possibility of extension
Period of cooperation 01.08.2025 - 31.01.2026
Start date 01.08.2025, ASAP or by agreement
Technology
  • Apache Hive
  • ETL
  • SQL
  • SCRUM
  • Apache Spark
  • BUILD - Gradle
  • Hadoop
  • Python
  • Palantir Foundry
  • Java
  • Azure DevOps
  • PySpark
  • Azure Machine Learning
Languages
  • English - active, B2/C1/C2

Project description

  • designing, developing, and maintaining scalable data transformation pipelines in the insurance sector
  • designing a data model, implementing data architecture
  • evaluating new analytical platform opportunities, developing prototypes, and helping to build a single source of truth for the application environment
  • collaboration within the global development team in the design and delivery of solutions
  • assisting stakeholders in solving functional and technical problems related to data
  • close cooperation with Product Owners and architects to understand requirements, formulate solutions and evaluate implementation efforts
  • working with the data management and governance platform
  • collaboration on the implementation of the solution on the Palantir Foundry platform
     
  • cooperation possible in hybrid ON-SITE mode [Bratislava] + rest REMOTE or in full REMOTE mode (based on agreement with the customer)
  • final remuneration depends on the candidate's experience and the course of the selection process
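The pipeline work described above follows the classic extract-transform-load pattern; a toy sketch of that pattern in plain Python (SQLite stands in for the real platform, and every table and field name here is invented for illustration):

```python
import sqlite3

def extract():
    # Extract: raw claim records as they might arrive from a source system.
    return [
        {"claim_id": 1, "amount_eur": "1200.50", "status": "OPEN"},
        {"claim_id": 2, "amount_eur": "300.00", "status": "closed"},
        {"claim_id": 3, "amount_eur": "bad", "status": "OPEN"},
    ]

def transform(rows):
    # Transform: parse amounts, normalise status, drop unparseable rows.
    clean = []
    for row in rows:
        try:
            amount = float(row["amount_eur"])
        except ValueError:
            continue  # a real pipeline would route this to a dead-letter store
        clean.append((row["claim_id"], amount, row["status"].upper()))
    return clean

def load(rows, conn):
    # Load: write the cleaned rows into a warehouse-style table.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS claims "
        "(claim_id INTEGER, amount_eur REAL, status TEXT)"
    )
    conn.executemany("INSERT INTO claims VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
open_total = conn.execute(
    "SELECT SUM(amount_eur) FROM claims WHERE status = 'OPEN'"
).fetchone()[0]
print(open_total)  # 1200.5
```

At project scale the same three stages run on PySpark over distributed data; the structure of the pipeline is what carries over.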

Project requirements

  • min. 4 years of demonstrable project experience in data and/or software engineering on large data projects
  • advanced knowledge of:
    • Python and the PySpark framework for creating and optimizing complex data pipelines
    • SQL [Spark SQL] for data retrieval and manipulation
  • advanced experience with:
    • working with large datasets on big data platforms and distributed computing
    • Scrum/Agile development methodologies
    • working in a global distributed team in a multicultural environment
  • experience with:
    • DWH concepts and ETL techniques
    • Spark, Hive, Hadoop
    • JavaScript, TypeScript, HTML, CSS 
    • Java, Gradle 
    • working in an Azure cloud environment
    • working in Azure DevOps
    • working in Palantir Foundry (Data Layers, Ontology Layers, Contour, Code Workbook, Carbon Workspace)
    • projects in the insurance or financial sector
    • AI/ML projects
       
  • active knowledge of English at a communicative level (min. B2-C1)
  • min. bachelor's or equivalent degree in computer science, data science, or a similar discipline
     
  • strong analytical and organizational skills
  • ability to solve problems effectively
  • independence and willingness to learn
  • strong interpersonal and communication skills (written and verbal)
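The "advanced knowledge of SQL [Spark SQL]" requirement above usually implies comfort with constructs such as window functions. A self-contained illustration (SQLite stands in for Spark SQL, and the data is invented; the LAG syntax shown works unchanged in both dialects):

```python
import sqlite3

# Illustrative only: invented policy premium data in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE premiums (policy_id INTEGER, year INTEGER, amount REAL)"
)
conn.executemany(
    "INSERT INTO premiums VALUES (?, ?, ?)",
    [(1, 2023, 500.0), (1, 2024, 550.0), (2, 2023, 800.0), (2, 2024, 760.0)],
)

# Year-over-year premium change per policy via the LAG window function.
rows = conn.execute(
    """
    SELECT policy_id, year,
           amount - LAG(amount) OVER (
               PARTITION BY policy_id ORDER BY year
           ) AS yoy_change
    FROM premiums
    ORDER BY policy_id, year
    """
).fetchall()
for r in rows:
    print(r)
```

The first row of each partition has no predecessor, so its `yoy_change` is NULL; the 2024 rows show +50.0 and -40.0 respectively.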
Are you interested in this project?
Recommend an IT specialist Do you know anyone who could use this project? Recommend them and get a reward of 800 €!
New to the world of IT freelancing?

Freedom, flexibility, greater control over finances and career. Freelancing has evolved and offers much more today. See what's in store for you and how it will change your life.

30 071 Titans that have joined us

710 Clients that have joined us

589 995 Successfully supplied man-days