    Senior Data Engineer (Experience in Python, Snowflake, and dbt)

    Hyderabad
    Full-Time
    Senior (7+ yrs)
    Engineering & Development
    Posted on January 20, 2026
    About the Team
    Data is at the core of Outreach's strategy. It drives our product and our customers to the highest levels of success. We use it to derive customer health scores and revenue dashboards, monitor operational metrics of our AWS infrastructure, increase product engagement and user productivity through natural language understanding, and power predictive insights and causal inference via experimentation. As our customer base continues to grow, we are looking for new ways to leverage our data to understand our customers’ needs more deeply and to deliver new products and features that continuously improve their customer engagement workflows.
    The Business Systems Data Engineering team is on a mission to bring relevant data to every person at Outreach while advancing the use of data and promoting an insight-centric culture. We are a centralized resource providing a suite of services to Outreach business functions across Finance, Analytics, and Support systems, with data as the critical component to a seamless experience.
    Our Vision of You:
  1. Data is your passion, from schema design to understanding data pipelines to data validation and optimization. You love everything about data and live in this world.
  2. You strive to become an expert in analytics toolchains and solutions (such as Databricks, Python, and Spark) and understand their nuances as you optimize schemas to take advantage of their unique features.
  3. You are meticulous about data and data-pipeline quality and obsessive about building solutions that verify data correctness at every step of the pipeline.
    Qualifications:
  1. 7+ years of data engineering experience.
  2. Support strategic planning by working with Executives, Senior Managers, Principal Engineers, and cross-team Data Engineers.
  3. Good understanding of RDBMS and NoSQL data stores and when to use each.
  4. Experience with OLAP and OLTP datastores, including schema design and query optimization.
  5. Design, build, and launch collections of sophisticated data models and visualizations that support multiple use cases across different products or domains.
  6. Experience with distributed message bus systems (e.g., Kafka, RabbitMQ).
  7. Strong skills in at least one programming language (Python highly preferred).
  8. Experience with the Spark ecosystem (Delta Lake, Databricks, etc.).
  9. Optimize pipelines, dashboards, frameworks, and systems to facilitate easier development of data artifacts.
  10. Strong foundation in data modeling, schema design, and data quality best practices, with hands-on experience on cloud platforms like Snowflake or Databricks.
  11. Expertise in building multi-step ETL jobs with tooling like dbt; proficiency with workflow management platforms like Airflow and version control tools such as GitHub.
  12. Strong communication and collaboration skills, with a focus on clarity, empathy, and shared ownership.
  13. You embody our core values: we are hungry craftspeople, we have grit, we are honest, we take ownership, we have each other’s back no matter what, we’re one with our customers, and we find strength in diversity and inclusion.

    Company: Outreach

    Provides an AI-powered sales engagement platform that automates workflows and frees sellers to focus on strategic conversations.
    1001-5000 employees
    Software & IT Services
    HQ: United States