Senior Data Platform Engineer - Vilnius, Lithuania - Oxylabs

    Oxylabs Vilnius, Lithuania

    Found: beBee S2 LT - 1 week ago

    Full-time
    Description

    We are a market-leading web intelligence collection platform, providing premium proxies and data scraping solutions for large-scale public web data gathering. Today, we unite over 450 data industry professionals for one purpose: to create a future where all businesses have access to big data and business intelligence, and a work environment where everyone can grow and thrive.

    You will become a part of a cross-functional team building a petabyte-scale data platform. The challenges in this quest are numerous - managing vast amounts of data, ensuring high performance and reliability, and dealing with complex data integration and processing tasks. With your expertise in cloud-native technologies and data-intensive systems, combined with your eagerness to learn new technologies in data analytics and engineering, we are confident in our ability to overcome these challenges together. As part of a small team, you will enjoy significant autonomy, empowering you to bring forward and develop your ideas. Your contributions will directly impact the effectiveness and evolution of our data platform. We believe your innovative mindset, strong sense of ownership, problem-solving skills, and ability to work collaboratively in a dynamic environment will be key to our collective success.

    Your day-to-day:

  • Deploy, maintain, and monitor data platform infrastructure using Kubernetes, Helm, and related technologies.
  • Tackle the challenges of high availability, reliability, and scalability of the data platform.
  • Collaborate with cross-functional teams to troubleshoot issues, design and implement new data platform solutions.
  • Assist data engineers and data analysts in troubleshooting issues and improving their workflow.
  • Implement automated deployment and testing processes.

    Your skills & experience:

  • Strong experience with Kubernetes, Helm, and cloud-native technologies.
  • Understanding of networking, security, and data storage concepts.
  • Willingness to learn about data processing systems and data engineering.
  • Experience in deploying and maintaining data-intensive systems (for example Apache Kafka).
  • Python programming skills.
    It would be great if you have:
  • Experience working with on-premises or cloud data platforms.
  • Java and Scala programming skills.
  • Experience with Spark, Trino, Iceberg, or other data processing systems.

    Your future tech:

  • Kubernetes
  • Apache Spark
  • Python
  • SQL
  • Dagster
  • Apache Iceberg
  • DBT
  • Apache Kafka
  • Trino
  • ArgoCD

    Salary:

  • Gross salary: EUR/month. Keep in mind that we are open to discussing a different salary based on your skills and experience.

    Up for the challenge? Let's talk