Successfully filled
Rate: 6 200 € – 7 300 € / month
Form of cooperation: Full-time / Part-time / 50% Remote
Sector: Telco
Location: Bratislava

Note: The reward is calculated upon delivery of 20 MD per month (1 MD = 8 h)
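To make the note above concrete, here is a minimal sketch of the arithmetic. The monthly range (6 200 € – 7 300 €) and the 20 MD / 8 h figures come from the listing; the per-man-day and per-hour rates are derived for illustration and are not quoted by the client:

```python
# Monthly rate range quoted in the listing (EUR).
monthly_low, monthly_high = 6_200, 7_300

# Per the note: the reward assumes delivery of 20 MD per month, 1 MD = 8 h.
md_per_month = 20
hours_per_md = 8

# Derived per-man-day rates (illustrative, not quoted in the listing).
md_rate_low = monthly_low / md_per_month    # 310.0 EUR/MD
md_rate_high = monthly_high / md_per_month  # 365.0 EUR/MD

# Derived hourly rates (illustrative).
hourly_low = md_rate_low / hours_per_md     # 38.75 EUR/h
hourly_high = md_rate_high / hours_per_md   # 45.625 EUR/h

print(f"{md_rate_low:.0f}-{md_rate_high:.0f} EUR/MD, "
      f"{hourly_low:.2f}-{hourly_high:.3f} EUR/h")
# → 310-365 EUR/MD, 38.75-45.625 EUR/h
```

In other words, the quoted monthly range corresponds to roughly 310 – 365 € per man-day delivered.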

Project duration: 10 months, with the possibility of extension
Period of cooperation: 01.12.2025 – 30.09.2026
Start date: 01.12.2025 (asap, or by agreement)
Technology
  • UNIX/Linux
  • Apache Hive
  • Apache Kafka
  • Apache Spark
  • Hadoop
  • Kubernetes
  • Docker
  • Grafana
  • Prometheus
Languages
  • English – active (B2/C1/C2)
  • Slovak or Czech – native

Project description

  • Collaboration in the administration and development of data infrastructure supporting AI initiatives
  • Responsibility for the operation of the Hadoop and Apache ecosystem, Kubernetes clusters, CI/CD pipelines, and ensuring a stable and efficient environment for data scientists and ML engineers
  • Tasks and responsibilities:
    • Administration and maintenance of infrastructure (Hadoop stack – HDFS, Hive, Impala, Spark; Apache Kafka, Beam, NiFi, Airflow)
    • Operation and maintenance of Kubernetes clusters, databases, and object storage systems
    • Management of user access, RBAC, networking, firewall rules
    • Automation of provisioning processes (Conda, Docker, Kubernetes)
    • Creation and maintenance of CI/CD pipelines for data platforms
    • Monitoring, logging, backup & restore (Prometheus, Grafana, ELK stack)
    • Incident resolution and support for data and ML teams
    • Ensuring high availability and performance of the infrastructure
       
  • Collaboration is possible in a hybrid mode (remote + on-site), with on-site work in Bratislava 3 days a week, or fully remote as agreed on the project
  • The initial period on the project is full-time; from next year, part-time
  • Final remuneration depends on the candidate's experience

Project requirements

  • min. 2–3 years of experience with data infrastructure administration in a production environment
  • Excellent knowledge of Linux systems, networking, and security best practices
  • Knowledge of the Apache stack (Kafka, NiFi, Beam, Airflow)
  • Experience with CI/CD pipelines and deployment automation
  • Experience with:
    • Hadoop ecosystem (HDFS, Hive, Spark, Impala)
    • Docker and Kubernetes
    • Monitoring and logging (Prometheus, Grafana, ELK stack)
  • Ability to resolve incidents and troubleshoot in production
  • Advantageous experience:
    • Telco sector or large distributed systems
    • GCP or AWS
    • MLflow and experiment tracking
    • Scripting and automation (Bash, Python, Ansible, SaltStack)
       
  • Very good organizational and communication skills
  • Analytical and conceptual thinking
  • Team player
  • Ability to learn quickly
  • Leadership
     
  • Tech stack
    • Linux
    • Hadoop (HDFS, Hive, Impala, Spark), Kafka, NiFi, Beam, Airflow
    • Kubernetes, Docker
    • Prometheus, Grafana, ELK, CI/CD, MLflow, Python/Bash, GCP/AWS
Are you interested in this project?
Recommend an IT specialist Do you know anyone who could use this project? Recommend them and get a 700 € reward!
Hire an IT specialist Do you need a similar IT freelancer for your project? Hire a specialist
New to the world of IT freelancing?

Freedom, flexibility, greater control over finances and career. Freelancing has evolved and offers much more today. See what's in store for you and how it will change your life.
