Senior Software Engineer - Machine Learning Infrastructure at Discord
San Francisco, CA, US

Be one of the founding members of our data team and develop Discord's next-gen platform. Help us build scalable distributed systems like you'd construct additional pylons in your base. You build those in your sleep, amirite?


Discord is a small group of passionate gamers whose mission is to bring people together around games. Diversity and inclusiveness are a critical part of how we get there. We believe that with diversity comes a better product, better decisions, and a better work environment. Everyone here is committed to making Discord representative of the world we want to live and play in.


What you'll be doing

You'll be one of the first members of our Data team: build the vision and help define the architecture of our new Data platform.

Help us build a robust, scalable foundation for data, from logging to streaming pipelines, ETL, and data consumers.

Own and operate our entire data stack using modern technologies such as Apache Beam, Pub/Sub, and Airflow.

Work closely with our engineers, data scientists, and analysts to optimize our data pipelines and establish best practices for table schemas, data models, and data storage.

Build efficient solutions on top of GCP and AWS using Python, Scala, and Go.

What you should have

A minimum of 4 years of experience building scalable backend systems.

Experience building and deploying large-scale systems in Python, Go, Scala/Java, or similar languages.

Experience working with and managing varied distributed data systems such as Kafka, Storm, or Spark.

Love of working with high-volume heterogeneous data and distributed systems.

Self-motivation and the ability to take a high-level goal and deliver shippable code. 

Bonus Points

Proven track record of working with terabyte or petabyte-scale data infrastructure.

Experience working with varied data applications and databases, such as Hadoop, Druid, Spark or Redshift.

Expert knowledge of SQL, MapReduce, and/or statistical scripting tools (e.g., R).

BS/MS/PhD in Computer Science, Applied Mathematics or a related field.

Godlike pylon construction skills.