Can LLMs Steer Autonomous Driving Into the Mainstream?
Large language models like ChatGPT show promise for autonomous driving by reasoning about complex edge cases better than today's systems, but challenges around latency, real-time constraints, and potential inaccuracies need to be addressed first.
Summary
- Autonomous driving currently relies on specialized sensors, high-definition maps, and many narrow AI models, an approach that is complex, expensive, and handles novel scenarios poorly.
- Large language models can reason about complex driving scenarios better than legacy algorithms, offering a path to handle the "long tail" of rare events.
- However, LLMs today are too slow for real-time driving decisions and can sometimes "hallucinate" incorrect responses.
- Model optimization and hybrid architectures that split work between the car and the data center offer ways around the latency problem.
- Better training data quality and reinforcement learning can also help align LLM outputs with safe, appropriate responses.
- LLMs may still lack driving-specific concepts that only proprietary driving data could teach them.
- If these limitations are solved, LLMs could provide the safety and scalability needed for autonomous driving to reach the mainstream.
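The hybrid split described above can be sketched as a latency-budgeted fallback: the car queries a cloud-hosted LLM, but if the answer does not arrive within a hard real-time deadline, a fast onboard model decides instead. This is a minimal illustration, not the article's actual system; the budget value, model stubs, and function names are all hypothetical.

```python
import concurrent.futures
import time

LATENCY_BUDGET_S = 0.05  # hypothetical 50 ms hard budget for an in-car decision


def onboard_policy(scene: str) -> str:
    """Fast, specialized in-car model: always meets the real-time budget."""
    return f"onboard: keep lane ({scene})"


def cloud_llm(scene: str) -> str:
    """Slower cloud-hosted LLM: richer reasoning, unpredictable latency."""
    time.sleep(0.2)  # simulate network round-trip plus inference delay
    return f"cloud: reroute around obstruction ({scene})"


def decide(scene: str) -> str:
    """Ask the cloud LLM, but fall back to the onboard model on deadline miss."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(cloud_llm, scene)
        try:
            return future.result(timeout=LATENCY_BUDGET_S)
        except concurrent.futures.TimeoutError:
            return onboard_policy(scene)


print(decide("construction zone ahead"))
# → onboard: keep lane (construction zone ahead)
```

Because the simulated cloud call takes 200 ms against a 50 ms budget, the onboard policy wins here; in a real system the cloud answer could still be used asynchronously for slower tasks such as route planning.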