Tips:
Primary Skills: Vertica/SQL/Python;
Secondary Skills: Spark/Flink/Hive/Python
Fluent spoken English; able to work in an all-English environment
The Role
We are looking for a Data Engineer to be part of our Applications Engineering team. This person will design, develop, maintain, and support our Enterprise Data Warehouse & BI platform within Tesla using various data & BI tools. This position offers a unique opportunity to make a significant impact across the entire organization by developing data tools and driving a data-driven culture.
Responsibilities:
• Work in a time-constrained environment to analyze, design, develop, and deliver Enterprise Data Warehouse solutions for Tesla’s Sales, Delivery, and Logistics teams.
• Build ETL pipelines using Python and Airflow.
• Build real-time data streaming and processing using open-source technologies such as Kafka and Spark.
• Create data pipelines to maintain a data lake in the AWS and Azure clouds.
• Work with systems that handle sensitive data under strict SOX controls and change management processes.
• Develop collaborative relationships with key business sponsors and IT resources for the efficient resolution of work requests.
• Provide timely and accurate estimates for newly proposed functionality and enhancements.
• Handle critical situations as they arise.
• Communicate technical and business topics, as appropriate, in a 360-degree fashion when required; communicate using written, verbal, and/or presentation materials as necessary.
• Develop, enforce, and recommend enhancements to applications in the areas of standards, methodologies, compliance, and quality assurance practices; participate in design and code walkthroughs.
• Utilize technical and domain knowledge to develop and implement effective solutions; provide hands-on mentoring to team members through all phases of the Systems Development Life Cycle (SDLC) using Agile practices.
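The ETL responsibility above can be sketched as a plain extract → transform → load flow. This is a minimal, dependency-free illustration: in practice each function would typically run as an Airflow task in a DAG, and all names and the sample data here are hypothetical.

```python
def extract():
    """Pretend to pull raw delivery records from a source system."""
    return [
        {"order_id": 1, "region": "EMEA", "delivered": True},
        {"order_id": 2, "region": "APAC", "delivered": False},
        {"order_id": 3, "region": "EMEA", "delivered": True},
    ]


def transform(rows):
    """Keep only delivered orders and normalize the region name."""
    return [
        {"order_id": r["order_id"], "region": r["region"].lower()}
        for r in rows
        if r["delivered"]
    ]


def load(rows, warehouse):
    """Append the cleaned rows to the target table (a list here)."""
    warehouse.extend(rows)
    return len(rows)


if __name__ == "__main__":
    warehouse = []
    loaded = load(transform(extract()), warehouse)
    print(f"Loaded {loaded} rows")  # Loaded 2 rows
```

Keeping extract, transform, and load as separate functions mirrors how an orchestrator like Airflow schedules and retries each stage independently.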
Qualifications:
Minimum Qualifications:
• 3+ years of experience with cloud technologies such as AWS and Azure
• 3+ years of experience in creating data pipelines using Python
• 3+ years of experience in Data Modelling
• Must have strong experience in Data Warehouse and ETL design and development, methodologies, tools, processes, and best practices
• Strong experience creating polished dashboards and reports for C-level executives
Preferred Qualifications:
• 3+ years of development experience with open-source technologies such as Python and Java
• Experience in Big Data processing using the Apache Hadoop/Spark ecosystem (Hadoop, Hive, Spark, Kafka, HDFS) preferred
• Excellent query-writing and communication skills
• Familiarity with common APIs: REST, SOAP
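For the REST familiarity mentioned above, a minimal sketch using only Python's standard library shows how a JSON POST request is assembled. The endpoint URL and payload are hypothetical; the request is built but not sent.

```python
import json
import urllib.request


def build_json_post(url, payload):
    """Build (but do not send) a REST-style JSON POST request."""
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url,
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_json_post("https://api.example.com/v1/orders", {"order_id": 1})
print(req.get_method())  # POST
```

In production code this request would be sent with `urllib.request.urlopen(req)` or, more commonly, a client library such as `requests`.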