Senior Data Warehouse Engineer
Binance

Job Summary
Binance is seeking a Senior Data Warehouse Engineer to build a universal and flexible data warehouse system that can quickly support the company's needs. The ideal candidate will have 6+ years of experience in data lake and data warehouse design and development, with a deep understanding of data governance and data warehouse development methodology. Proficiency in at least one of Java, Scala, or Python, as well as Hive/Spark SQL, is required, along with familiarity with OLAP technology and Big Data components such as Hadoop, Hive, Spark, and Delta Lake. The successful candidate will have excellent analytical and problem-solving skills, with the ability to work both independently and collaboratively as part of a technical team. Binance offers a competitive salary, company benefits, and flexible work arrangements, including remote work options.
Job Responsibilities
- Build a universal, flexible data warehouse system in line with the company’s data warehouse specifications and business understanding, so that new needs can be supported quickly and repetitive development work is reduced.
- Own data model design, development, testing, deployment, and online data job monitoring; quickly solve complex problems, especially optimizing complex calculation logic and tuning performance.
- Participate in data governance, including building the company’s metadata management system and data quality monitoring system.
- Participate in technical team building and learning, and contribute to the team’s overall knowledge accumulation and skill improvement.
Job Requirements
- 6+ years of experience in data lake and data warehouse design and development.
- Deep understanding of data warehouse modeling and data governance.
- Solid knowledge of data warehouse development methodologies, including dimensional modeling, the Corporate Information Factory, and OneData.
- Proficient in at least one of Java, Scala, or Python, as well as Hive and Spark SQL.
- Familiar with OLAP technologies such as Kylin, Impala, Presto, and Druid.
- Proficient in Big Data batch pipeline development.
- Familiar with Big Data components including, but not limited to, Hadoop, Hive, Spark, Delta Lake, Hudi, Presto, HBase, Kafka, ZooKeeper, Airflow, Elasticsearch, and Redis.
- Experience with AWS Big Data services is a plus.
- Clear thinker with strong business requirement understanding, analysis, abstraction, and system design capabilities.
- Experience handling PB-scale data volumes at Internet companies and solving difficult production issues is preferred.