About the role
Our algorithms process millions of data points in real time to reduce urban congestion and improve safety. We are a mobility-focused company that solves problems with cutting-edge machine learning and computer vision algorithms. We have market-leading products in vehicle occupancy detection (VOD), automatic incident detection (AID), and camera analytics for smart cities. If you like tackling interesting and challenging multidisciplinary problems, keep reading.
As a Senior Systems Engineer, you will work on our Vehicle Occupancy Detection (VOD) products and contribute to enhancing our market-leading solutions. We are looking for candidates with strong software and engineering skills who are comfortable with system integration, perception, statistics, and data analytics. At Invision AI we strive to write clean, production-level code and embrace code review and teamwork. You will be a key participant at this stage of our startup and be responsible for making cool stuff happen.
Our culture is open and collaborative. We listen. We need smart and creative people, who love new initiatives and improve and learn from their mistakes. If this appeals to you, we would love to hear from you!
Requirements
Scope of Responsibilities
You will be part of a team that works across our software architecture, data integration, and analysis pipelines. This is a hands-on role where you'll see your code move in the physical world—expect to spend time in the field testing on actual vehicles.
- Own the end-to-end integration of perception algorithms into production-ready edge devices
- Design, develop, and maintain features and algorithms for our products
- Write clear and maintainable code; participate in code reviews and help maintain codebase quality
- Support deployments and help keep them up to date and functioning
- Mentor junior engineers and drive architectural decisions for our VOD sensor fusion pipelines
- Participate in daily standups, sprint planning, and retrospectives
- Conduct field tests
- Write technical documentation
Skills Required
- Strong programming experience in at least two of the following languages: Elixir, Go, C++, Python
- Experience with embedded systems, Linux, communication protocols, and networking
- Software productization experience (CI, unit and integration tests, monitoring and alerting)
- 4+ years' experience deploying solutions to customers
- Proficiency with version control systems (git)
- A high standard for quality and delivery
- Independent and quick learner
- Strong communication and presentation skills
- Experience working with images, lidar, and point cloud data is a plus
Beneficial
- Knowledge of statistics, linear algebra, estimation theory, and computer vision
- Previous exposure to visualization and graphics rendering tools (e.g. MeshLab, Unreal Engine, CARLA, or equivalents)
- Experience with real-time middleware frameworks, multi-threading, and multi-processing
- Data and cloud tools (AWS, DVC)
- Hands-on experience with mechanical and electrical tools, and with driving vehicles
Benefits
- Competitive total compensation including meaningful equity participation
- Comprehensive health and dental coverage to support you and your family
- Company-matched RRSP plan to help you invest in your future
- Four weeks of paid vacation to rest, recharge, and explore
- Opportunity to work on market-leading technology and grow as we scale
About Invision AI
Massive amounts of data are being created at the edge, and uploading this data to the cloud for real-time analysis is neither financially nor technically feasible. Extracting meaning from this data is computationally intensive and expensive, limiting the adoption of edge AI to a few high-value applications.
After years of research by leaders in computer vision and machine learning, we have developed technology that is two to three orders of magnitude more efficient than other high-accuracy deep learning systems. Our patent-pending technology evolves the state of the art with a combination of optimization, unique insight, and just a little magic.
Video is our first application: the system is trained in the cloud by ingesting video where objects and activities of interest have been annotated. Learning is distilled into ultra-efficient software that runs on fixed, body-mounted and vehicle-mounted cameras. We can enable analytics on most professional IP video cameras with just a firmware update. Older cameras can be retrofitted with a low-power embedded processor, such as those from our partners NXP and Ambarella. Our software-only system gets smarter over time: edge cases (where the system is not very confident that it knows what is happening) are captured, uploaded to the cloud for analysis, and the learning distilled into ever smarter software.
We work with more than just video: infrared, thermal, radar, and lidar play important complementary roles in helping to establish accurate, detailed situational awareness across a wide variety of environmental conditions. Each sensor detects, classifies, and tracks multiple objects locally in real time. This metadata is then pushed to the cloud, or to a central ECU in the case of a vehicle, where it is fused to establish ground truth, detect behavior, track objects across multiple sensors, and identify anomalies.
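The flow described above, where each sensor emits lightweight detection metadata and a central node associates detections across sensors, can be sketched in a few lines of Python. This is an illustrative assumption only: the `Detection` fields, the distance threshold, and the greedy association rule are made up for the example and are not Invision AI's actual fusion implementation.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    # Illustrative metadata a sensor might push to the fusion node.
    sensor_id: str
    label: str   # object class, e.g. "car"
    x: float     # position along the roadway, metres (assumed coordinate)
    t: float     # timestamp, seconds

def fuse(detections, max_gap=2.0):
    """Greedily group detections of the same class that are spatially
    close but come from different sensors, approximating cross-sensor
    track association."""
    fused = []
    for det in sorted(detections, key=lambda d: (d.label, d.x)):
        for track in fused:
            last = track[-1]
            if (det.label == last.label
                    and abs(det.x - last.x) <= max_gap
                    and det.sensor_id != last.sensor_id):
                track.append(det)
                break
        else:
            fused.append([det])  # start a new track
    return fused

# Metadata from two sensors observing the same car, plus a separate truck.
edge_metadata = [
    Detection("cam-1", "car", 10.0, 0.0),
    Detection("lidar-1", "car", 10.5, 0.1),
    Detection("cam-2", "truck", 40.0, 0.2),
]
tracks = fuse(edge_metadata)
print(len(tracks))  # 2: one fused car track (two sensors), one truck
```

A real system would associate in time as well as space and carry uncertainty per detection; the sketch only shows why pushing compact metadata, rather than raw video, makes central fusion cheap.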