
ML/LLM Ops Intern
Rackspace
Job Summary
We are seeking an ML Ops / LLM Ops Intern to support our AI product development team. This role offers hands-on experience deploying and maintaining machine learning models, optimizing model performance, and keeping AI operations running smoothly. You'll work with modern cloud-based ML tools, automation pipelines, and model-serving frameworks; experience with large language models (LLMs) and integrating applications with AI agents is highly desirable. Our team values curiosity, ownership, and a drive to improve, and we offer flexible remote work, a $4,000/year travel stipend, and equity in a fast-growing company. If you're passionate about AI and want hands-on experience with real-world AI production systems, apply now.
Key Responsibilities
- Assist in deploying and monitoring machine learning models in production environments.
- Develop and maintain CI/CD pipelines for ML model training and inference.
- Optimize infrastructure and model performance for scalability and efficiency.
- Work with cloud services (AWS, GCP, or Azure) to manage ML workflows.
- Troubleshoot and improve model serving, logging, and monitoring systems.
- Collaborate on integrating applications with AI agents and LLMs.
Qualifications
- Final-year student or recent graduate in Computer Science, Engineering, or a related field.
- Basic understanding of ML model deployment, Docker, Kubernetes, and cloud services.
- Experience with Python, TensorFlow/PyTorch, and MLflow is a plus.
- Familiarity with automation tools and scripting (Bash, Terraform, etc.).
- Experience working with LLMs and AI agent integration is preferred.
- Strong preference for candidates with experience in OpenAI and Azure environments.
- Full-time availability.
Why Join Us?
- Work on cutting-edge AI deployment and scaling challenges.
- Gain hands-on experience with real-world AI production systems.
- Mentorship from experienced AI engineers.
- Opportunity for a full-time position upon successful completion of the internship.