
Data Cloud Engineer (DataOps)
Trafilea
Location: Uruguay
Design and maintain scalable data platforms in the cloud for Trafilea's e-commerce businesses.
About Trafilea
Trafilea is a dynamic and innovative Tech E-commerce Group that operates multiple direct-to-consumer brands in the intimate apparel and beauty sectors, with a focus on using data-driven strategies to scale its businesses. In addition to our products, we have our own online community dedicated to promoting body positivity. As a rapidly growing global player, Trafilea is committed to creating high-quality products and services that enhance the customer experience and drive long-term growth.
At Trafilea, we foster a culture of collaboration, innovation, and continuous learning. We believe in investing in our people and providing them with the support and development opportunities they need to grow both personally and professionally. With our remote-first approach, you'll have the freedom to work from anywhere in the world, surrounded by a diverse and talented team that spans the globe.
🌟 Role Mission
The Data Cloud Engineer is responsible for designing and maintaining scalable, secure, and efficient data platforms in the cloud. You will work closely with Data Engineers, DevOps, and Analytics teams to ensure high availability, performance, and reliability of our data infrastructure.
Your mission is to enable real-time and batch data processing, ensure data security and compliance, and optimize cloud resources to support large-scale analytics and machine learning workloads.
🛠️ Responsibilities
🔹 Cloud Data Infrastructure & Optimization
Design and implement scalable data architectures in AWS (Redshift, Athena, Glue, S3, DynamoDB).
Optimize data storage, processing, and retrieval strategies for performance and cost efficiency.
Implement data lake and data warehouse solutions to support analytics and BI teams.
🔹 Security & Compliance
Ensure data security, encryption, and access control policies are in place.
Implement GDPR, SOC 2, and other compliance best practices for data governance.
Monitor for and prevent data breaches, unauthorized access, and performance degradation.
🔹 Observability & Performance Monitoring
Implement monitoring and alerting solutions for data pipelines (Datadog, Prometheus).
Troubleshoot performance bottlenecks and improve query execution times.
🔹 Collaboration & Knowledge Sharing
Work closely with Data Scientists, BI Analysts, and DevOps teams to align data solutions with business needs.
Document data workflows, best practices, and infrastructure standards.
Provide guidance on data engineering best practices and mentor junior engineers.
What we offer
Collaborate with world-class talent in a data-driven, dynamic, energetic work environment.
Opportunity to grow and develop both professionally and personally.
Safe space to be who you truly are, with a commitment to diversity, equity, and inclusion.
Openness to new ideas and initiatives.
Great benefits package including remote work, 15 working days of paid holidays, a learning subsidy, and more!
We've been recognized by Forbes and FlexJobs as one of the Top 25 Companies for Remote Workers. Apply now!
🎓 Qualifications
✅ Must-Have Skills
✔️ 3+ years of experience as a Data Cloud Engineer, Data Engineer, or Cloud Engineer.
✔️ Strong experience with AWS cloud services for data processing (Redshift, Athena, Glue, S3, DynamoDB, Kinesis).
✔️ Expertise in SQL and distributed databases (PostgreSQL, Snowflake, BigQuery, ClickHouse).
✔️ Hands-on experience with ETL/ELT pipelines, automation, and orchestration tools (Airflow, Step Functions, dbt).
✔️ Strong knowledge of Infrastructure as Code (Terraform, CloudFormation) for data infrastructure.
✔️ Experience working with large-scale data processing frameworks (Apache Spark, Flink, EMR).
✔️ Proficiency in Python, Bash, or similar scripting languages for automation.
✅ Nice-to-Have Skills
➕ Experience with streaming data architectures (Kafka, Kinesis, Apache Flink, RabbitMQ).
➕ Knowledge of machine learning pipelines and feature stores.
➕ Experience with Data Governance, Data Quality, and Security Best Practices.
➕ Understanding of FinOps principles for cost optimization in cloud data solutions.