Mainstream E2E models today rely on massive datasets to map raw sensor input directly to driving control commands, but their decision-making is difficult to interpret and prone to failure in rare or complex scenarios: a single deep network converts camera images into control signals at high speed, with opaque logic and poor performance in unfamiliar situations. In contrast, the VLA model introduces a reasoning mechanism borrowed from language models, allowing it to correlate perception inputs, infer causal relationships, and mitigate the “black box” problem. With its built-in knowledge base and stronger generalization capability, VLA is better suited to dynamic, unpredictable urban driving environments.
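The architectural difference can be sketched in a few lines of toy code. This is purely illustrative — the names, data types, and pipeline stages below are assumptions for exposition, not DeepRoute.ai's actual implementation: the point is that the E2E policy is one opaque mapping, while the VLA policy exposes an inspectable language-level reasoning step between perception and action.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Controls:
    steering: float  # radians; negative = left
    throttle: float  # 0.0 to 1.0
    brake: float     # 0.0 to 1.0

def e2e_policy(camera_frames: List[bytes]) -> Controls:
    """Classic end-to-end (hypothetical): raw pixels map directly to
    control signals through one network, with no intermediate state
    a human can inspect.  Output here is a fixed placeholder."""
    # features = backbone(camera_frames); return control_head(features)
    return Controls(steering=0.0, throttle=0.2, brake=0.0)

def vla_policy(camera_frames: List[bytes]) -> Controls:
    """VLA-style pipeline (hypothetical): perception is first summarized
    as a language-like scene description, a reasoning step infers risks
    and causal relations from it, and only then is an action chosen,
    so each stage can be logged and audited."""
    scene = "bus stopped at crosswalk; pedestrians may be occluded"  # perception -> language
    plan = "occlusion risk ahead: reduce speed, prepare to yield"    # language -> reasoning
    brake = 0.3 if "reduce speed" in plan else 0.0                   # reasoning -> action
    return Controls(steering=0.0, throttle=0.0, brake=brake)
```

In the sketch, the interpretability gain is simply that `scene` and `plan` exist as readable intermediate values, whereas the E2E function offers nothing between input and output.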

At the launch event, DeepRoute.ai showcased four core functions of the VLA model. Spatial semantic comprehension enables the system to predict potential risks in blind spots (such as sightlines blocked by buses, complex intersections, or bridges and tunnels) and to proactively slow down or adopt defensive driving strategies, something traditional E2E models struggle to achieve. Irregular obstacle detection accurately identifies non-standard objects such as construction cones and overloaded small trucks, outperforming conventional geometry- or contour-based approaches. Traffic sign text recognition precisely interprets markings such as reversible lanes and bus-only lanes, addressing the limitations of pure image recognition under low resolution or complex lighting. Memory-based voice control supports natural language interaction and continuous learning, adapting to individual driver habits, whereas most existing in-car voice systems still rely on rigid, command-style operation.

In terms of mass-production readiness, the DeepRoute IO 2.0 platform follows a “multi-modal + multi-chip + multi-vehicle” design philosophy, supporting both LiDAR-equipped and pure vision configurations. This flexibility allows rapid customization for different automakers, outperforming many ADAS solutions limited to a single sensing approach. DeepRoute.ai has already secured five production programs based on this platform, with the first batch of vehicles set to hit the market soon. The company has delivered nearly 100,000 production vehicles equipped with urban navigation assistance systems, covering SUVs, MPVs, and off-road models, and has accumulated more than ten model-specific programs. This production record gives DeepRoute.ai a stronger market position than competitors still stuck in pilot phases.

According to CEO Zhou Guang, “100,000 units is just the beginning. The advanced driver-assistance market is opening up rapidly, and core technology capabilities will be the decisive factor.” Going forward, DeepRoute.ai plans to accelerate deployment of its VLA model in passenger vehicles while expanding its Robotaxi business on mass-production platforms, gradually building a Road AGI framework. The company’s future developments are worth close attention.