Vaneet Randhawa

Meet the team: Robert de Temple

Updated: Apr 12


Robert de Temple recently joined us as our new principal director for perception software & deep learning. Before joining the team, Robert was the primary technical advisor for AI and computer vision technologies at Jaguar Land Rover and ASTech — a software development house within the Audi/Volkswagen group.

Robert brings to the team a wealth of experience in AI, algorithms, deep learning and self-supervised learning, having built deep learning perception stacks across multiple sensor modalities at Jaguar Land Rover and ASTech. We’ve been particularly impressed by his creative flair, which allows him to glance at data and devise efficient solutions that outperform traditional approaches.

In this article, we get to know Robert a little better and hear his thoughts on the industry and the opportunities ahead.


How did you get into this line of work?

My family is actually made up of doctors, so I guess that makes me the odd one out. But I have always had a knack for technology and STEM subjects, and for as long as I can remember I have wanted to be an inventor. So, when it came to university, I chose a general engineering degree called mechatronics, which lets you try mechanical engineering, electrical engineering and related parts of computer science. I settled on electrical engineering and embedded systems early in my studies.

I have always been interested in AI, but at the time the field was in a deep winter and there wasn’t a lot of exciting work happening. Then, near the end of my studies, I enrolled in a couple of computer vision subjects as extracurricular exams. Coming from Germany, I very quickly found a job in the automotive industry, developing active safety systems and autonomous driving functions. The potential of deep learning became obvious in 2012–2013, after a leading computer vision benchmark competition was won using deep learning. At that point I started to dive head first into the research publications around the technology and to apply it to my work, at first purely out of personal interest. A few hundred papers later, I was given the unique opportunity to become the lead engineer for perception and deep learning at my company, and I have continued on this path through every subsequent stage of my career.


What do you like most about working in this industry?

Though autonomous driving is a very important application, the technology behind the systems is even more exciting. Working with AI really makes you feel like you are making the future happen. Whatever we develop will have seemed like science fiction just a few years ago and that in itself is so exciting to me that I could not imagine doing anything else.

Have you had any role models or anyone you admired in the industry?

I have always found other engineers really inspiring, particularly those who have had interesting career trajectories. But I think my main role model is someone I only met for a very short time. My great grandfather, who died when I was still a toddler, was an extremely inspiring man. While I still have a few small memories of him, I really got to know him when I translated his diaries, which were written over 100 years ago. He is a mysterious part of our family history: he was a master clockmaker and a master optician, he spoke several languages fluently, and he was an expert in electrical and mechanical engineering.

He was the kind of person who always tinkered and was never afraid to learn something new, and that made a big impression on me. At one point, he was the oldest master clockmaker still working in Germany, and he joined the master optician school in his mid-70s. He ended up working as a master optician for another 20 years. I find that kind of attitude very moving.


Thinking about AI as a whole, what do you think are the biggest challenges that have been overcome?

I feel like we are still at the beginning. We have never really had a good definition of what AI actually is because we don’t really understand what intelligence itself actually means.

But the next challenge I see is bringing the high data and resource requirements down. For the good of the industry, it cannot be that only big multi-billion-dollar companies can design effective computer vision systems for their autonomous applications, while everyone else is stuck with a solution that may or may not work for the applications they have in mind, with no real means to change it.

However, a lot of open source spirit has grown in the sector in recent years, with the AI community collaborating and sharing its libraries. This has accelerated development and benefitted everyone in the industry.

Additionally, advances in computation and sensors have been rapid, especially in the lidar space. This has started a whole avalanche of exciting development projects and applications across the industry.


Looking ahead, having recently joined the business, what do you think is the biggest opportunity for Cron AI now?

Cameras have traditionally been the most important sensing modality for almost any application, because of the great advances in the algorithms available for them. Now we have a similar situation with lidar: very high-resolution sensors are becoming affordable, accurate and reliable enough to be useful for a lot of applications.

However, the algorithmic side is not as mature as it is for cameras, because the data has not been available. A whole new class of customers and applications now lack the expertise to make sense of lidar data. All of these customers desperately need a software solution that computes objects and other higher-level, actionable information from the data these sensors produce, but there is currently nothing on the market to fill this gap. This presents a great opportunity for us to offer our solution and, I believe, help power the next generation of autonomous applications.
