Senior Software Engineer - Data Tools at Discord
San Francisco, CA, US

Discord is one of the few places that combines who you play games with and how you play them. We use this data to make Discord the best place to play games with your friends.

Join our Data Tools team to build solutions on top of our data platform that empower our Analytics & Data Science teams. Help us build world-class data tools that break any and all crafting level records. Your crafting skill is over 9000, right?

 

Discord is a small group of passionate gamers whose mission is to bring people together around games. Diversity and inclusiveness are a critical part of how we get there. We believe that with diversity comes a better product, better decisions, and a better work environment. Everyone here is committed to making Discord representative of the world we want to live and play in.

 

What you'll be doing

Level up our Analytics & Data Science teams by improving their workflows and helping them interface with our data platform.

Work with our Engineering & Infrastructure teams to define, scope and implement robust interfaces for ingestion, data processing, workflow management and everything in between.

Help us take our ability to visualize, consume and act upon data to the next level.

Build tools & automation on top of our data stack using technologies such as React, JavaScript, Python, Go, and Bash.

What you should have

Minimum of 4 years of experience building scalable backend systems.

Experience working with and managing varied distributed data systems such as Kafka, Storm, or Spark.

Experience shipping products & internal tools in Python, Go, Scala/Java, or other similar languages.

Comfort and confidence working on a variety of tasks ranging from backend infrastructure and systems programming to full stack projects.

Self-motivation and the ability to take a high-level goal and deliver shippable code.

Bonus Points

Proven track record of working with petabyte-scale data infrastructure. Discord has plenty of it.

Experience working with varied data applications and databases, such as Hadoop, BigQuery, Spark, or Redshift.

Expert knowledge of SQL, MapReduce, and/or statistical scripting tools (e.g., R).

BS/MS/PhD in Computer Science, Applied Mathematics or a related field.

Godlike crafting skills.