
Specialist Solutions Architect - Data Engineering
Databricks
Location: Costa Rica
Data Engineering Specialist Solutions Architect - guide customers on big data solutions on Databricks
Awin Global
Data Engineer (AI/NLP) at Awin: Build data pipelines, ensure data quality, and collaborate with data scientists.
Heetch
Join Heetch as a Product Data Scientist and drive business growth with data-driven decisions
Udacity
Design and develop data solutions using Apache Spark, Scala, Airflow, Postgres, and Redshift on AWS
Flipster
Data Engineer at Flipster: Leverage data to drive business decisions and propel crypto innovation.
Nuna
Design and implement scalable data platforms for value-based care at Nuna. Work on production-hardened systems, mentor junior engineers, and collaborate with a team to drive innovation in healthcare technology.
Wealthfront
Design and maintain core datasets at Wealthfront, collaborate with cross-functional teams, ensure high-quality data for impactful decision-making, and contribute to building a best-in-class data infrastructure.
Goodnotes
Data engineer position at Goodnotes, building data pipelines and analytics infrastructure with expertise in ETL, ELT, distributed systems, and big data solutions.
Kueski
Data Engineer at Kueski: Design solid, scalable data-driven solutions to support the long-term needs of data consumers. Competitive salary, wellness benefits, and flexible working hours.
Workiva
Design and develop scalable data pipelines using Airbyte, Kafka, and Snowflake. Build modular transformations with dbt and optimize data serving layers for high-performance data products. Collaborate with cross-functional teams and drive innovation in data engineering technologies at Workiva.
Trafilea
Build and maintain data pipelines using Apache Airflow and AWS technologies at Trafilea. Collaborate with architects to ensure best practices in data engineering while contributing to the company's growth through scalable solutions.
Zencore
Join Zencore as a Data Engineer to leverage your expertise in Google Cloud and cloud-native tools to help companies transform their data infrastructure. Work with cutting-edge technologies like BigQuery, Snowflake, Spark, Hadoop, and Apache Airflow while collaborating with innovative teams. Enjoy competitive compensation and remote work flexibility.
Coursera
Join Coursera's Data Engineering team as a Senior Data Engineer and shape the future of data-driven decision-making.
Swapcard
Senior Data Engineer for AI-powered event platform with 5+ years of experience in ETL, transformation pipelines, and workflow orchestrators.
Sanity.io
Data Engineer at Sanity.io: Design scalable ETL/ELT pipelines, collaborate with teams, and establish best practices for data ingestion and transformation.
Twilio
Join Twilio as a Software Engineer Graduate to work on real-time communications, distributed systems, and scalable infrastructure solutions. Utilize your skills in networking, security, programming languages like Python and Java, and databases to contribute to high-impact projects while enjoying benefits such as competitive pay and flexible remote work options.
ElevenLabs
Music data labeler at ElevenLabs, enhancing machine learning models with expertise in music theory and structure.
Reddit
As a Data Scientist at Reddit, you'll work on ads targeting, relevance modeling, auction optimization, measurement, and user experience. Leverage data to drive advertiser value and collaborate with cross-functional teams to build impactful solutions.
Databricks
We are seeking a Specialist Solutions Architect (SSA) - Data Engineering to guide customers in building big data solutions on Databricks. In this customer-facing role, you will work with Solution Architects and provide technical leadership for successful implementations of big data projects. You will have 5+ years of experience in a technical role, with expertise in at least one area such as software engineering, data engineering, or data applications engineering. Your responsibilities will include architecting production-level data pipelines, providing tutorials and training, and contributing to the Databricks Community. We offer flexible remote work options, $4,000/year travel stipends, and equity in a fast-growing company. If you are fluent in English and Spanish, we encourage you to apply for this role.
FEQ425R190
This role can be remote, and fluent proficiency in English and Spanish is required.
As a Specialist Solutions Architect (SSA) - Data Engineering, you will guide customers in building big data solutions on Databricks that span a large variety of use cases. You will be in a customer-facing role, working with and supporting Solution Architects, a role that requires hands-on production experience with Apache Spark™ and expertise in other data technologies. SSAs help customers through design and successful implementation of essential workloads while aligning their technical roadmap for expanding the usage of the Databricks Data Intelligence Platform. As a go-to expert reporting to the Specialist Field Engineering Manager, you will continue to strengthen your technical skills through mentorship, learning, and internal training programs, and establish yourself in an area of specialty - whether that be streaming, performance tuning, industry expertise, or more.
The impact you will have:
Provide technical leadership to guide strategic customers to successful implementations on big data projects, ranging from architectural design to data engineering to model deployment
Architect production-level data pipelines, including end-to-end pipeline load performance testing and optimization
Become a technical expert in an area such as data lake technology, big data streaming, or big data ingestion and workflows
Assist Solution Architects with more advanced aspects of the technical sale including custom proof of concept content, estimating workload sizing, and custom architectures
Provide tutorials and training to improve community adoption (including hackathons and conference presentations)
Contribute to the Databricks Community
What we look for:
Fluent proficiency in English and Portuguese/Spanish
5+ years of experience in a technical role with expertise in at least one of the following:
Software Engineering/Data Engineering: data ingestion, streaming technologies (such as Spark Streaming and Kafka), performance tuning, troubleshooting, and debugging Spark or other big data solutions
Data Applications Engineering: build use cases that use data, such as risk modeling, fraud detection, and customer lifetime value
Extensive experience building big data pipelines
Experience maintaining and extending production data systems to evolve with complex needs
Deep Specialty Expertise in at least one of the following areas:
Experience scaling big data workloads (such as ETL) that are performant and cost-effective
Experience migrating Hadoop workloads to the public cloud - AWS, Azure, or GCP
Experience with large scale data ingestion pipelines and data migrations - including CDC and streaming ingestion pipelines
Expert with cloud data lake technologies, such as Delta Lake and Delta Live Tables
Bachelor's degree in Computer Science, Information Systems, Engineering, or equivalent work experience
Production programming experience in SQL and Python, Scala, or Java
2 years of professional experience with Big Data technologies (e.g., Spark, Hadoop, Kafka) and architectures
2 years of customer-facing experience in a pre-sales or post-sales role
Can meet expectations for technical training and role-specific outcomes within 6 months of hire
Can travel up to 30% when needed
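One of the specialty areas listed above, CDC (change data capture) ingestion, can be sketched in miniature. The following is a hypothetical, stdlib-only toy illustration of how an ordered change feed is applied to a target table as upserts and deletes — a real Databricks pipeline would use Delta Lake MERGE or streaming ingestion rather than an in-memory dict:

```python
# Toy CDC apply: each event carries an op (insert/update/delete),
# a key, and a payload. Events must be applied in order.
# Hypothetical example, not Databricks-specific.

def apply_cdc(table, events):
    """Apply an ordered CDC event stream to an in-memory 'table' (dict keyed by id)."""
    for event in events:
        op, key = event["op"], event["id"]
        if op in ("insert", "update"):
            # Upsert: merge the payload into the existing row, or create a new row
            table[key] = {**table.get(key, {}), **event["data"]}
        elif op == "delete":
            table.pop(key, None)  # idempotent delete
        else:
            raise ValueError(f"unknown op: {op}")
    return table

events = [
    {"op": "insert", "id": 1, "data": {"name": "a", "amount": 10}},
    {"op": "update", "id": 1, "data": {"amount": 15}},
    {"op": "insert", "id": 2, "data": {"name": "b", "amount": 20}},
    {"op": "delete", "id": 2, "data": {}},
]
result = apply_cdc({}, events)
# row 1 survives with the merged update; row 2 was inserted then deleted
```

The upsert-merge and idempotent-delete semantics here mirror what a MERGE INTO statement expresses declaratively on a Delta table.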