Binance Accelerator Program - Data Warehouse Engineer
Job Summary
The Binance Accelerator Program is a 3-6 month program for early career talent to experience life at Binance and gain an immersive understanding of the blockchain ecosystem. As a Data Warehouse Engineer, you will build a universal and flexible data warehouse system, participate in data governance, and contribute to technical team building and learning growth. You will work with world-class talent in a user-centric global organization with a flat structure, tackling unique projects with autonomy in an innovative environment. This program offers competitive salary, company benefits, and the opportunity to shape the future of blockchain technology. With flexible remote work options, you can thrive in this results-driven workplace and propel your career forward. By joining Binance, you will be part of a diverse workforce that is fundamental to our success.
Requirements
- Undergraduate in a quantitative discipline, such as Mathematics/Statistics, Actuarial Sciences, Computer Science, Engineering, or Life Sciences.
- Understanding of data warehouse modeling and data governance, and knowledge of data warehouse development methodologies, including dimensional modeling, the corporate information factory, etc. (see the first sketch after this list).
- Proficient in at least one of Java, Scala, or Python, as well as Hive and Spark SQL.
- Familiar with OLAP technologies such as Kylin, Impala, Presto, Druid, etc.
- Knowledgeable in Big Data batch pipeline development.
- Familiar with Big Data components including but not limited to Hadoop, Hive, Spark, Delta Lake, Hudi, Presto, HBase, Kafka, ZooKeeper, Airflow, Elasticsearch, Redis, etc.
- Experience with AWS Big Data services is a plus.
- Strong team collaboration skills and the ability to build partnerships with other teams and business units.
- Experience in real-time data processing and familiarity with stream processing frameworks such as Apache Kafka and Apache Flink; in-depth knowledge of Lakehouse technology with practical project experience; proficiency in StarRocks, including its data model design, query optimization, and performance tuning (a streaming sketch follows this list).
- Experience in knowledge graph construction and application, and familiarity with graph databases such as Nebula.
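For concreteness, here is a minimal sketch of the kind of dimensional-model query the Spark SQL requirement above refers to. The star schema (fact_trades, dim_date, dim_user) and its columns are hypothetical, invented purely for illustration:

```python
from pyspark.sql import SparkSession

# Minimal sketch: aggregate a hypothetical star schema with Spark SQL.
# fact_trades is the fact table; dim_date and dim_user are dimensions.
spark = SparkSession.builder.appName("dim-model-sketch").getOrCreate()

daily_volume = spark.sql("""
    SELECT d.date_key,
           u.user_tier,
           SUM(f.trade_amount) AS total_volume
    FROM   fact_trades f
    JOIN   dim_date    d ON f.date_key = d.date_key
    JOIN   dim_user    u ON f.user_key = u.user_key
    GROUP  BY d.date_key, u.user_tier
""")

daily_volume.show()
```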
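Likewise, a minimal sketch of the real-time side using the kafka-python client. The topic name, broker address, and event fields are assumptions; a production pipeline would typically run inside a framework such as Flink rather than a bare consumer loop:

```python
import json

from kafka import KafkaConsumer

# Minimal sketch: consume hypothetical trade events and keep a running
# count per symbol. No framework features (state, checkpoints, windows).
consumer = KafkaConsumer(
    "trades",                              # assumed topic name
    bootstrap_servers=["localhost:9092"],  # assumed broker address
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

counts: dict[str, int] = {}
for message in consumer:
    symbol = message.value.get("symbol", "UNKNOWN")
    counts[symbol] = counts.get(symbol, 0) + 1
    print(symbol, counts[symbol])
```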
Responsibilities
- Build a universal, flexible data warehouse system, following the company's data warehouse specifications and a solid understanding of the business, so that new requirements can be supported quickly and repetitive development work is reduced.
- Own data model design, development, testing, deployment, and online data job monitoring; quickly solve complex problems, especially optimizing complex calculation logic and tuning performance (see the pipeline sketch after this list).
- Participate in data governance, including building the company's metadata management system and data quality monitoring system.
- Design and implement an integrated lakehouse data platform to support real-time data processing and analysis requirements (see the lakehouse sketch after this list).
- Build knowledge graphs and provide in-depth business insights.
- Participate in technical team building and learning, and contribute to the team's overall knowledge accumulation and skill development.
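As a rough illustration of the batch development, job monitoring, and data quality responsibilities above, here is a minimal Airflow DAG sketch. The DAG id, schedule, and check logic are hypothetical placeholders:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def build_model():
    """Placeholder for the job that builds the warehouse model."""


def check_quality():
    """Hypothetical quality gate: fail the run if the load looks empty."""
    row_count = 1  # in practice, query the freshly loaded table here
    if row_count <= 0:
        raise ValueError("data quality check failed: no rows loaded")


with DAG(
    dag_id="dw_daily_build",          # assumed DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # Airflow 2.4+ style schedule argument
    catchup=False,
) as dag:
    build = PythonOperator(task_id="build_model", python_callable=build_model)
    quality = PythonOperator(task_id="check_quality", python_callable=check_quality)

    build >> quality  # the quality gate runs, and can alert, after each build
```

A failing quality task surfaces in Airflow's UI and alerting, which is one common way online data jobs are monitored.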
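And for the lakehouse responsibility, a minimal Delta Lake streaming sketch. The table paths are hypothetical, and it assumes the delta-spark package is available to the Spark session:

```python
from pyspark.sql import SparkSession

# Minimal sketch: stream a hypothetical bronze Delta table into an
# aggregated silver table, assuming delta-spark is on the classpath.
spark = (
    SparkSession.builder.appName("lakehouse-sketch")
    .config("spark.sql.extensions",
            "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

bronze = spark.readStream.format("delta").load("/lake/bronze/trades")
counts = bronze.groupBy("symbol").count()

(counts.writeStream.format("delta")
    .outputMode("complete")
    .option("checkpointLocation", "/lake/checkpoints/trade_counts")
    .start("/lake/silver/trade_counts"))
```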