
Sr. Data Engineer I

Splice
United States of America · Full Time

Job Summary

We are seeking a Sr. Data Engineer I to create tools, pipelines, and systems that enable business growth through reliable data operations. The ideal candidate will have 5+ years of experience building scalable software, along with strong Python, SQL, and Unix fundamentals, and should be familiar with data transformation frameworks, business intelligence platforms, and cloud infrastructure. As a member of our team, you will work on projects that address scalability issues, automate manual workflows, and ensure data quality. You will also participate in a business hours-only on-call rotation to ensure system uptime and quality. With flexible remote work options and a $4,000/year travel stipend, this is an exciting opportunity to join a fast-growing company. We value curiosity, ownership, and a drive to improve, and we offer equity in the company.

JOB TITLE: Sr. Data Engineer I (US - Remote)
LOCATION: Remote

THE ROLE:  

As a member of the Data Engineering team, you will create tools, pipelines, and systems that enable the business to reliably operate at scale, gain mission-critical insight, and power engaging data products for our customers. Your work will provide critical observability into large-scale problems that are front and center for the business. Along the way, you’ll be championing a culture of data literacy and experimentation, empowering Splice to build the best product it possibly can for music creators everywhere! If this sounds like exciting and fulfilling work to you, apply today!

WHAT YOU’LL DO:

  • Own and operate the structure of our Data Warehouse, ensuring reliable ingestion of mission-critical data and reliable builds of our pipelines.

  • Build and maintain self-service tools and extensible datasets that enable our peers across the organization to get the insight they need. 

  • Identify, scope, and execute projects that address scalability issues in our batch builds, automate manual workflows, and add confidence to our analytics by simplifying our datasets.

  • Ensure the quality of our data by writing tests (see the sketch after this list), building observability into our pipelines, reviewing RFCs, and providing guidance in data modeling.

  • Participate in a business hours-only on-call rotation to ensure the uptime and quality of our systems.

  • Create and cultivate a culture of data literacy, experimentation, and data-driven decision making.
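
By way of illustration only (this posting prescribes no specific stack), here is a minimal sketch of the kind of data-quality test described above, written in Python against a hypothetical orders table. It uses sqlite3 purely so the example is self-contained; a production check would run the same assertions against the warehouse:

    import sqlite3

    # Hypothetical checks: the output table must have no NULL or duplicate ids.
    # Each query returns a count of violating rows; zero means the check passes.
    CHECKS = {
        "no_null_ids": "SELECT COUNT(*) FROM orders WHERE id IS NULL",
        "no_duplicate_ids": """
            SELECT COUNT(*) FROM (
                SELECT id FROM orders GROUP BY id HAVING COUNT(*) > 1
            )
        """,
    }

    def run_checks(conn: sqlite3.Connection) -> None:
        for name, sql in CHECKS.items():
            (violations,) = conn.execute(sql).fetchone()
            if violations:
                raise AssertionError(f"check {name!r} failed: {violations} bad rows")

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
        conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.99), (2, 4.99)])
        run_checks(conn)  # passes silently on clean data

Checks like these typically run as a post-build step, so a bad load fails loudly instead of silently propagating downstream.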

JOB REQUIREMENTS:

  • 5+ years of experience building scalable and durable software.

  • Demonstrated mastery of Python, SQL, and Unix fundamentals.

  • Demonstrated operational excellence in maintaining Data Warehouses, such as GCP BigQuery or AWS Redshift (see the sketch after this list).

  • Strong familiarity with data transformation frameworks, such as SQLMesh or dbt.

  • Experience with business intelligence platforms or data visualization frameworks like Looker, Hashboard, or Observable.

  • Strong debugging skills, especially with distributed systems.

  • Experience building and supporting cloud infrastructure on Google Cloud Platform (GCP) and Amazon Web Services (AWS).

  • Clear and consistent communication in a distributed environment.
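
As a concrete, purely hypothetical illustration of the Python + SQL + warehouse combination above, the sketch below pulls a seven-day row count from BigQuery with the google-cloud-bigquery client; the project, dataset, and table names are invented placeholders, not any real schema:

    from google.cloud import bigquery

    # Hypothetical sketch: daily row counts for the past week from a warehouse
    # table. Assumes application-default GCP credentials are configured.
    client = bigquery.Client(project="example-project")

    query = """
        SELECT DATE(created_at) AS day, COUNT(*) AS n_rows
        FROM `example-project.analytics.events`
        WHERE created_at >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
        GROUP BY day
        ORDER BY day
    """

    for row in client.query(query).result():  # blocks until the job finishes
        print(f"{row.day}: {row.n_rows} rows")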

NICE TO HAVES: 

  • Experience building Infrastructure as Code (IaC) with Terraform. 

  • Demonstrated proficiency with observability tools like StatsD, Datadog, CloudWatch, etc. (see the sketch after this list).

  • Demonstrated proficiency with containers and container orchestration.
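
On the observability point, here is a minimal sketch of pipeline instrumentation using the community statsd Python client; the agent address, metric prefix, and metric names are assumptions chosen for illustration:

    import statsd

    # Hypothetical sketch: emit timing, counter, and gauge metrics from a
    # pipeline step. The local agent address and metric names are placeholders.
    metrics = statsd.StatsClient("localhost", 8125, prefix="data_pipeline")

    def load_batch(rows):
        with metrics.timer("load_batch.duration"):   # time the whole step
            for row in rows:
                ...  # write row to the warehouse
        metrics.incr("load_batch.runs")              # count completed runs
        metrics.gauge("load_batch.rows", len(rows))  # record batch size

    load_batch([{"id": 1}, {"id": 2}])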


The national pay range for this role is $142,500 - $155,000. Individual compensation will be commensurate with the candidate's experience.