Senior Site Reliability Engineer - Storage Platform

Company: NVIDIA
Location: Santa Clara
Closing Date: 08/11/2024
Salary: 148,000 USD - 276,000 USD Per Annum
Hours: Full Time
Type: Permanent
Job Requirements / Description

Site Reliability Engineering (SRE) is an engineering discipline focused on designing, building, and maintaining large-scale production systems with high efficiency and availability. It spans software and systems engineering practices, storage, data management, and services. SRE professionals are specialists with expertise across domains such as systems, networking, storage, coding, database management, capacity management, continuous delivery and deployment, and open-source cloud-enabling technologies like Kubernetes, containers, and virtualization. Their responsibilities include providing reliable storage, managing data efficiently, and delivering the related services that support the overall stability and performance of production systems.

SRE at NVIDIA ensures that our internal- and external-facing GPU cloud services deliver the reliability and uptime promised to users, while enabling developers to change the existing system through careful preparation and planning and with close attention to capacity, latency, and performance. SRE is also a mindset and a set of engineering approaches to running and optimizing better production systems. Much of our software development focuses on eliminating manual work through automation, performance tuning, and improving the efficiency of production systems. Because SREs are responsible for the big picture of how our systems relate to each other, we use a breadth of tools and approaches to tackle a broad spectrum of problems. Practices such as limiting time spent on reactive operational work, blameless postmortems, and proactive identification of potential outages drive the iterative improvement that is key to product quality and to interesting, dynamic day-to-day work.

SRE's culture of diversity, intellectual curiosity, problem-solving, and openness is central to its success. Our organization brings together people with a wide variety of backgrounds, experiences, and perspectives. We encourage them to collaborate, think big, and take risks in a blame-free environment. We promote self-direction to work on meaningful projects, while striving to build an environment that provides the support and mentorship needed to learn and grow.

What You Will Be Doing:

  1. Assist in the design, implementation, and support of large-scale storage clusters, including monitoring, logging, and alerting.
  2. Work with AI/ML workloads to capture and correlate behavior in large clusters and workflows that would otherwise be hard to understand.
  3. Work closely with peers on the team to improve the lifecycle of services – from inception and design, through deployment, operation, and refinement.
  4. Support services before they go live through activities such as system design consulting, developing software and frameworks, capacity management, and launch reviews.
  5. Maintain services once they are live by measuring and monitoring availability, latency, and overall system health, including leveraging machine learning models.
  6. Scale systems sustainably through mechanisms like AI/ML and automation, and evolve systems by pushing for changes that improve reliability and velocity.
  7. Practice sustainable incident response and blameless postmortems.
  8. Be part of an on-call rotation to support production systems.

What We Need To See:

  1. BS degree in Computer Science or related technical field involving coding (e.g., physics or mathematics) or equivalent experience.
  2. 5+ years of practical experience.
  3. Experience with algorithms, data structures, complexity analysis, software design, and maintaining large-scale Linux-based systems.
  4. Experience in one or more of the following: C/C++, Java, Python, Go, Perl, or Ruby, as well as AI/ML frameworks and methodologies.
  5. Good knowledge of infrastructure configuration management tools like Ansible, Chef, Puppet, and Terraform.
  6. Experience using observability and tracing tools such as InfluxDB, Prometheus, and the Elastic Stack.

Ways to stand out from the crowd:

  1. A demonstrated SRE mindset, a customer-first approach, a focus on customer satisfaction, and a passion for ensuring customer success. Experience with Git, code review, pipelines, and CI/CD.
  2. Interest in crafting, analyzing, and fixing large-scale distributed systems. Strong debugging skills and a systematic problem-solving approach to identifying complex problems.
  3. Thrive in collaborative environments and enjoy working with various teams. Experience using or running large private and public cloud systems based on Kubernetes, OpenStack, and Docker. Flexibility in adapting to different working styles.

NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and talented people on the planet working for us. If you're creative and autonomous, we want to hear from you!

The base salary range is 148,000 USD - 276,000 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.

You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
