Senior Software Engineer, Perception State Estimation
Job Type
Full Time
Salary
$171k - $235k
Skills
Python
C++
Summary
Mission/Vision: Latitude AI develops automated driving technology for next-generation Ford vehicles to make travel safer, less stressful, and more enjoyable. This role focuses on advancing cutting-edge perception and state estimation technology.
Key Responsibilities: Responsibilities include developing machine learning models, Bayesian algorithms, and deep learning models for multi-object tracking, state estimation, and road feature estimation, as well as transitioning these solutions from the lab to the road and collaborating with cross-disciplinary experts.
Growth Opportunities: Opportunities abound for continuous learning through the latest research, professional development reimbursements, and working alongside leading experts in a startup-like environment.
Description
Latitude AI is an automated driving technology company developing a hands-free, eyes-off driver assist system for next-generation Ford vehicles at scale. We’re driven by the opportunity to reimagine what it’s like to drive and make travel safer, less stressful, and more enjoyable for everyone.
When you join the Latitude team, you’ll work alongside leading experts across machine learning and robotics, cloud platforms, mapping, sensors and compute systems, test operations, systems and safety engineering – all dedicated to making a real, positive impact on the driving experience for millions of people.
As a Ford Motor Company subsidiary, we operate independently to develop automated driving technology at the speed of a technology startup. Latitude is headquartered in Pittsburgh with engineering centers in Dearborn, Mich., and Palo Alto, Calif.
Meet the team:
The State Estimation team is a group of highly skilled and experienced professionals who specialize in cutting-edge multi-object tracking, scene estimation, and machine learning technology. We collaborate to build advanced Bayesian filters, graph models, and deep learning models that temporally track both static and dynamic actors and estimate road features. The State Estimation team is the interface between the perception system and downstream autonomy consumers, including motion planning, prediction, and localization.
The team's primary focus is on developing compute-efficient models and systems that perform a wide range of tasks, such as closed-world multi-object tracking, track-to-detection data association, object motion forecasting, uncertainty estimation, lane line temporal smoothing, perception-based road shape generation, and open-world scene tracking and state estimation. The ultimate goal is to take these algorithms from the lab to the road, ensuring that they are optimized for onboard performance and able to function as production-grade perception systems on vehicles.
To achieve this goal, the team stays up to date with the latest research literature and pushes the boundaries of what is possible. We are dedicated to developing cutting-edge tracking algorithms, ML algorithms, and models that help vehicles reason about the world around them in real time.
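The Bayesian filtering described above is classical recursive state estimation. Purely as an illustration (a generic textbook sketch, not Latitude's actual implementation), a minimal constant-velocity Kalman filter for a single tracked object could look like the following Python snippet; the motion model, noise matrices, time step, and measurements are all assumed example values.

# Illustrative only: a constant-velocity Kalman filter for one tracked object.
# State x = [px, py, vx, vy]; measurement z = [px, py]. All values are example assumptions.
import numpy as np

dt = 0.1  # assumed time step between sensor frames (10 Hz)
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # constant-velocity motion model
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only position is observed
Q = 0.01 * np.eye(4)                        # process noise (assumed)
R = 0.1 * np.eye(2)                         # measurement noise (assumed)

def predict(x, P):
    # Propagate the state and covariance one step forward in time.
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    # Fuse a position measurement z into the predicted state.
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

x, P = np.zeros(4), np.eye(4)            # initial state and covariance
for z in (np.array([1.0, 0.5]), np.array([1.1, 0.55])):  # fake detections
    x, P = predict(x, P)
    x, P = update(x, P, z)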
What you’ll do:
Develop machine learning models and Bayesian algorithms for multi-object tracking, state estimation, and uncertainty estimation (an illustrative data-association sketch follows this list)
Develop deep learning models with temporal fusion for scene state estimation including occupancy, visibility, motion (velocity), and uncertainty
Develop estimation algorithms for road feature estimation such as lane lines and speed limit as well as estimating the road shape
Read literature, analyze raw data, and design state-of-the-art solutions
Transition solutions from the lab to the test track and public roads to ensure successful production-level implementation
Collaborate with perception experts and experienced roboticists on algorithm design, prototyping, testing, deployment, and productization
Build and maintain industry-leading software practices and principles
Develop clean and efficient software for perception modules interfacing with other key modules
Show initiative and be a valued team member in a fast-paced, innovative, and entrepreneurial environment
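As an illustration of the track-to-detection data association involved in multi-object tracking (again a generic sketch under assumed inputs, not Latitude's system), a simple gated assignment step can be written with SciPy's Hungarian solver; the Euclidean cost, gate threshold, and example positions below are assumptions.

# Illustrative only: track-to-detection association via the Hungarian algorithm.
# The cost metric, gate threshold, and example positions are assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_pos, det_pos, gate=2.0):
    # Pairwise Euclidean distances between predicted tracks and new detections.
    cost = np.linalg.norm(track_pos[:, None, :] - det_pos[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)              # minimum-cost assignment
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= gate]
    matched_t = {r for r, _ in matches}
    matched_d = {c for _, c in matches}
    unmatched_tracks = [i for i in range(len(track_pos)) if i not in matched_t]
    unmatched_dets = [j for j in range(len(det_pos)) if j not in matched_d]
    return matches, unmatched_tracks, unmatched_dets

tracks = np.array([[0.0, 0.0], [5.0, 5.0]])               # predicted track positions
detections = np.array([[0.2, -0.1], [9.0, 9.0]])          # new detections
print(associate(tracks, detections))                      # -> ([(0, 0)], [1], [1])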
What you'll need to succeed:
Bachelor's degree in Computer Engineering, Computer Science, Electrical Engineering, Robotics or a related field and 4+ years of relevant experience (or Master's degree and 2+ years of relevant experience, or PhD)
Relevant knowledge and experience in machine learning, with a proven track record of developing and deploying deep learning solutions using PyTorch, TensorFlow, or similar frameworks
Experience in developing multi-object tracking systems using classical algorithms or machine learning algorithms
Strong experience in deep learning, Bayesian filtering, and optimization algorithms
Proven experience in shipping perception software products to industry or consumers
At least 4 years of development experience in a Python/C++ environment
Nice to have:
Ph.D. with a machine learning focus, or equivalent experience
Experience developing and deploying temporal computer vision models (for instance, video activity recognition or using multiple frames to improve results on other tasks; a minimal sketch follows below)
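Purely as a generic sketch of what a temporal computer vision model can look like (not a requirement of the role or a description of Latitude's models), the PyTorch snippet below fuses per-frame features across time with a GRU before classifying a clip; the encoder, layer sizes, clip length, and class count are all assumed.

# Illustrative only: a minimal temporal video classifier (per-frame encoder + GRU).
# Layer sizes, clip length, and number of classes are assumptions.
import torch
import torch.nn as nn

class TemporalClassifier(nn.Module):
    def __init__(self, feat_dim=64, hidden_dim=128, num_classes=10):
        super().__init__()
        # Tiny per-frame encoder, standing in for a real image backbone.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.temporal = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, clips):                        # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.encoder(clips.flatten(0, 1))    # (B*T, feat_dim)
        _, last_hidden = self.temporal(feats.view(b, t, -1))  # fuse across time
        return self.head(last_hidden[-1])            # (B, num_classes)

logits = TemporalClassifier()(torch.randn(2, 8, 3, 64, 64))  # two 8-frame clips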
What we offer you:
Competitive compensation packages
High-quality individual and family medical, dental, and vision insurance
Health savings account with available employer match
Employer-matched 401(k) retirement plan with immediate vesting
Employer-paid group term life insurance and the option to elect voluntary life insurance
Paid parental leave
Paid medical leave
Unlimited vacation
15 paid holidays
Complimentary daily lunches, beverages, and snacks for onsite employees
Pre-tax spending accounts for healthcare and dependent care expenses
Pre-tax commuter benefits
Monthly wellness stipend
Adoption/Surrogacy support program
Backup child and elder care program
Professional development reimbursement
Employee assistance program
Discounted programs that include legal services, identity theft protection, pet insurance, and more
Company and team bonding outlets: employee resource groups, quarterly team activity stipend, and wellness initiatives
Learn more about Latitude’s team, mission, and career opportunities!
The expected base salary range for this full-time position in California is $170,560-$234,520 USD. Actual starting pay will be based on job-related factors, including exact work location, experience, relevant training and education, and skill level. Latitude employees are also eligible to participate in Latitude’s annual bonus programs, equity compensation, and generous Company benefits program, subject to eligibility requirements.
Candidates for positions with Latitude AI must be legally authorized to work in the United States on a permanent basis. Verification of employment eligibility will be required at the time of hire. Visa sponsorship is available for this position.
We are an Equal Opportunity Employer committed to a culturally diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, color, age, sex, national origin, sexual orientation, gender identity, disability status or protected veteran status.