# Siemens is Hiring
Change the future with us
Build and maintain efficient data pipeline architectures.
Assemble large, sophisticated data sets that meet functional and non-functional business requirements.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, and more.
Build the infrastructure required for efficient extraction, transformation, and loading of data from a wide variety of data sources using AWS technologies.
Work with partners across the Executive, Product, Data, and Design teams to resolve data-related technical issues and support their data infrastructure needs.
Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
We don’t need superheroes, just super minds
You have 2-4 years of advanced working SQL knowledge and experience with relational databases and query authoring, as well as working familiarity with a variety of databases.
Experience building and optimizing big data pipelines, architectures, and data sets.
You should also have experience with the following software and tools:
Experience with big data tools: Hadoop, Spark, etc.
Experience with data pipeline and workflow management tools: Azkaban, CI/CD tooling, etc.
Experience with AWS cloud services: Glue, Snowflake, Lambda, S3, Athena
Experience with object-oriented/object function scripting languages: Python, Java, Scala, etc.
Apply Here: https://sie.ag/3lz318S