Junior Data Engineer
Our partner is a leading programmatic media company, specializing in ingesting large volumes of data, modeling insights, and offering a range of products and services across Media, Analytics, and Technology. Among their clients are well-known brands such as Walmart, Barclaycard, and Ford.
The company has expanded to over 700 employees, with 15 global offices spanning four continents. With the imminent opening of a new office in Warsaw, we are seeking experienced Data Engineers to join their expanding team.
The Data Engineer will be responsible for designing, developing, and maintaining optimized, scalable, end-to-end Big Data pipelines for their products and applications. In this role, you will collaborate closely with team leads across various departments and receive support from peers and experts across multiple fields.
Opportunities:
- The opportunity to work at a successful company
- Career and professional growth
- Competitive salary
- Hybrid work model (combine working from home with an office in the heart of Warsaw)
- Long-term employment with 20 working days of paid vacation, sick leave, and national holidays
Responsibilities:
- Follow and promote best practices and design principles for Big Data ETL jobs
- Inform technology decisions for the business’s future data management and analysis needs by conducting proofs of concept (POCs)
- Monitor and troubleshoot performance issues on data warehouse/lakehouse systems
- Provide day-to-day support for data warehouse management
- Assist in improving data organization and accuracy
- Collaborate with data analysts, scientists, and engineers to ensure best practices in coding, data processing, and storage technologies
- Ensure that all deliverables adhere to our world-class standards
Skills:
- 1+ years of experience in data warehouse development and database design
- Deep understanding of distributed computing principles
- Experience with the AWS cloud platform and big data services such as EMR, Databricks, EC2, S3, and Redshift
- Experience with Scala, Spark, Hive, YARN/Mesos, etc.
- Experience with SQL and NoSQL databases, as well as data modeling and schema design
- Proficiency in programming languages such as Java, Scala, or Python for implementing data processing algorithms and workflows
- Experience with Presto and Kafka is a plus
- Experience with DevOps practices and tools for automating deployment, monitoring, and management of big data applications is a plus
- Excellent communication, analytical, and problem-solving skills
- Knowledge of scalable service architecture
- Experience building scalable data processing jobs on high-volume data
- Self-starter, proactive, and able to work to deadlines
If you are looking for an environment where you can grow professionally, learn from the best in the field, balance work and life, and enjoy a pleasant and enthusiastic atmosphere, submit your CV at cv@quontex.co and become part of our team!
Everything you do will help us lead the programmatic industry and make it better.