Summary: The Navigate360 suite of software, curriculum, and services helps K-12 schools and large institutions ensure the safety and well-being of millions across the US, Europe, and Oceania.
- More than 20,000 schools with nearly 12 million students in all 50 states trust Navigate360 solutions to ensure their students’ safety and well-being
- 5,000+ public safety agencies use Navigate360 training software and curriculum
- 18.9M+ individuals are covered by Navigate360 threat preparedness and response solutions
Our applications collect a vast amount of data on safety threats, preparedness drills, site maps and response plans, visitor logs, threat and behavioral cases, and more. Our Data Science and Solutions team is building the next generation of data integrations, analytics, and AI applications for K-12 schools to bring all this data to life in support of Navigate360's mission of "zero incidents" in our schools, colleges, and other public spaces. Come help us build the future of AI-powered software with one of the fastest-growing, most data-driven SaaS solutions for education.
At Navigate360, we are an AWS and Databricks shop – data management, security, and privacy are of the utmost importance to our mission. This position plays a critical part in building the infrastructure that powers our analytics and AI applications while keeping personal data protected and secure. The successful candidate will have 3+ years of proven experience building data pipelines, designing and architecting lakehouse artifacts, and shipping APIs that enable a variety of use cases: data integration, consumption via business intelligence tools and software, and machine learning and AI model training.
If selected, you will work with our software engineering teams to understand and ingest data from our live-site applications into our data lakehouse, implement data contracts, ensure high-quality change data capture, and move that data through the medallion architecture for delivery to customers via on-screen analytics, direct integration with their data infrastructure, and AI applications and agents; a sketch of one hop in that flow appears below. You will partner with data scientists and analysts to shape the data into features and metrics that can be used in a variety of ways. You will be a creative thinker with a growth mindset, self-motivated, and with a proven ability to work remotely and collaborate productively with a diverse team across US time zones.
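To make that workflow concrete, here is a minimal sketch of the bronze-to-silver hop of a medallion pipeline, assuming a Databricks environment with PySpark and Delta Lake; the table names, columns, and checkpoint path are hypothetical illustrations, not our actual schema:

```python
# Minimal sketch of one hop in a medallion pipeline: raw change events (bronze)
# are validated and deduplicated into a curated silver table. Table names,
# columns, and the checkpoint path are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

# Read the bronze change-data-capture feed as a stream of new rows.
bronze = spark.readStream.table("bronze.visitor_log_events")

# Enforce a simple data contract: required fields present, timestamps parsed.
silver = (
    bronze
    .filter(F.col("event_id").isNotNull() & F.col("school_id").isNotNull())
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .dropDuplicates(["event_id"])  # tolerate re-delivery of CDC events
)

# Append the cleaned records to the silver layer with checkpointed progress.
(silver.writeStream
    .option("checkpointLocation", "/tmp/checkpoints/visitor_log_silver")
    .toTable("silver.visitor_logs"))
```

The same pattern repeats for the silver-to-gold hop, where the curated records are aggregated into the metrics and features that analytics and AI consumers read.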
Duties / Responsibilities:
- Create and maintain high-quality, business-critical datasets in AWS (S3, DynamoDB) and the Databricks Lakehouse, including a SQL layer for specific consumption scenarios (see the sketch after this list).
- Build, maintain, scale, and optimize ETL processes that enable analytics and insights for customers and internal consumption via Databricks APIs.
- Design, code, unit test, and deploy data processes for ingestion, transformation, and curation of data while keeping costs under control and ensuring data security and privacy via AWS and the Databricks catalog.
- Design and build training and inference pipelines for text, image, and agentic frameworks for machine learning and AI applications.
- Explore, evaluate, and experiment on new data sources for application use cases as they become available.
- Create reliable automated data solutions based on the identification, collection, and evaluation of business requirements.
- Collaborate with other software engineers and data scientists/analysts to integrate data into in-product dashboards, reporting, machine learning services and AI applications.
- As part of a team, operate a highly secure data ecosystem for at-rest and streaming data, with high performance and extreme attention to data quality.
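As one example of the dataset-maintenance work above, here is a minimal sketch of an idempotent change-data-capture upsert into a curated table, assuming Delta Lake on Databricks; the table and column names are hypothetical:

```python
# Minimal sketch of maintaining a business-critical dataset with a CDC upsert.
# Table and column names are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.read.table("silver.threat_case_updates")  # latest CDC batch
target = DeltaTable.forName(spark, "gold.threat_cases")

(target.alias("t")
    .merge(updates.alias("u"), "t.case_id = u.case_id")
    .whenMatchedUpdateAll()      # apply changed fields to existing cases
    .whenNotMatchedInsertAll()   # insert newly opened cases
    .execute())
```

Because the merge keys on `case_id`, re-running the same batch leaves the gold table unchanged, which keeps downstream dashboards and AI features consistent.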