At Tetricus, our mission is to leverage advanced technology to transform mental health care. If you are passionate about making a meaningful impact in the lives of millions through the intersection of healthcare and technology, we would love to hear from you.
Role Description:
We’re looking for a full-time Senior Data Engineer to join our team, focusing on enhancing and scaling our machine learning operations, backend services, and database management. This role is pivotal in building and maintaining the infrastructure that supports a wide range of functionality, including high-throughput APIs, cloud deployments across multiple regions, machine learning model training and deployment pipelines, and sophisticated data transformations and analytics. The ideal candidate will possess deep expertise in data engineering principles and broad experience in backend development, cloud services, and machine learning data pipelines. They will be expected to dive deep into technical challenges, devise innovative solutions, engage in discussions with the team, and take full ownership of developing those solutions from conception to deployment.
What You'll Do:
- Design, architect, and take ownership of our data engineering infrastructure, including data pipelines, databases, machine learning model deployment, and API integrations that support both internal and external data flows from our web and iOS clients, through our databases, to our ML solutions.
- Work closely with other engineers, data scientists, and non-technical team members to build the most effective systems for managing our data and machine learning models, ensuring high availability, performance, and scalability.
- Develop solutions that have a significant impact on our organization’s efficiency and our users’ health outcomes, making data-driven decision-making more accessible across the company.
- Simplify complex data challenges for non-technical stakeholders, ensuring clarity and accessibility of data insights.
- Collaborate with healthcare providers to create seamless integrations between their systems and our data infrastructure, attending meetings and directly working with their technical teams to ensure compatibility and performance.
- Produce comprehensive documentation to support the development, maintenance, and usage of the data engineering systems.
- Implement and maintain robust logging, monitoring, and error handling mechanisms to ensure system reliability and quick recovery from failures.
- Communicate effectively about development progress, timelines, expectations, and any potential roadblocks with the team (We’re primarily based on the East Coast).
What You Bring:
- Bachelor’s degree or higher in Computer Science, Computer Engineering, a related field, or equivalent experience through bootcamps.
- 6+ years of relevant experience in data engineering with a proven track record of developing, deploying, and maintaining scalable and reliable data systems.
- Expertise in Python and SQL, with a strong background in building and managing REST APIs, data pipelines, and machine learning pipelines in a cloud environment.
- Our infrastructure is built on Google Cloud Platform, with FastAPI for our backends, PostgreSQL and BigQuery for data storage, and dbt for managing our data pipelines.
- Our machine learning workloads run on Vertex AI.
- Experience with data modeling, ETL processes, and working with big data technologies.
- Excellent problem-solving, communication, and teamwork skills, with the ability to explain complex data concepts in simple terms.
Nice to Haves:
- Familiarity with machine learning frameworks and the ability to work closely with data scientists to deploy and scale ML models.
- Experience in health tech, electronic health records, or similar fields is highly desirable.
What We’re Looking For:
- 5+ years of relevant experience in a quantitative field, including experience building, evaluating, and deploying ML models
- Deep understanding of machine learning methodologies, particularly as they relate to recommendation systems or reinforcement learning
- Practical experience building, validating, and deploying machine learning models at scale
- Hands-on experience with machine learning libraries and frameworks such as TensorFlow, PyTorch, and scikit-learn
- Proficiency in Python and other object-oriented programming languages
- Knowledge of state-of-the-art techniques in machine learning, AI, and data analytics, as well as a strong desire to continue learning and adapting to new technologies
- Understanding of regulations and best practices in handling sensitive healthcare data
- Bonus: Familiarity with large language models (LLMs) and generative AI
- Bonus: Familiarity with large-scale data processing tools (e.g., Hadoop, Spark SQL) and experience handling large datasets
What We Offer:
- A passionate, tight-knit team committed to making a significant impact on patient health outcomes.
- Direct involvement in projects that improve psychiatric healthcare delivery and patient care.
- A comprehensive benefits package, including health, dental, and vision coverage. Tetricus contributes 90% of package cost.
- Unlimited PTO and flexible working hours.
- A dynamic environment with a wide range of technologies and projects to expand your skills and knowledge.
- Opportunities for occasional travel for onsite meetings and quarterly company retreats.
- Potential for equity.
This position reports directly to the Head of Engineering.
The pay range for this position at the start of employment is expected to be between $160,000 and $180,000 per year. However, the base pay offered may vary depending on multiple factors, including job-related knowledge, skills, experience, and market factors.