You will be part of a team working on a challenging and interesting project in an international setup, and you will have the opportunity to work on modern web applications and services.
Project, teams, requirements & duties
What will you be working on?
Create and maintain optimal data pipeline architecture
Establish and maintain the set of business reporting requirements
Research and validate entity data using appropriate internal and external data sources
Build the infrastructure required for optimal extraction, transformation, and loading of data using SQL and AWS ‘big data’ technologies
Work with data and analytics experts to strive for greater functionality in data systems
What do we expect from you?
A passion for data interpretation, strong numeracy, and great attention to numeric detail and accuracy
Excellent problem-solving and troubleshooting skills
Good analytical skills, attention to detail, strong technical skills, and a willingness to learn
SQL proficiency
Experience with ETL tools and concepts
Knowledge of Python and/or Java
Working experience with cloud technologies, preferably AWS
Familiarity with data warehousing (DWH) concepts (e.g. Redshift)
As a plus:
Familiarity with big data technology stack (Hadoop, Spark, Kafka, Pandas)
Unix/Linux and shell scripting (with sed, awk,…)
Our projects & stack: The applications you will work on are built with Python, Scala, AWS cloud services, Spark, Pandas, etc. We are building data pipelines, data lakes, and API services to support data scientists and BI reporting.
Our teams: Depending on the project setup, our teams can consist of data engineers, data scientists, backend developers, test developers, DevOps engineers, and a delivery manager.
Your position in the organization: This position will be part of one of our Data departments in Novi Sad, and your department manager will be there to guide you and support your career development.
What do we offer?
And much more! You will hear more details during the interview!