We are just starting the self-driving car era. It’s a logical follow-on to having GPS and always-connected vehicles, but we are still in the early days of evolution. Even so, it’s a fair bet that a decade from now, most, if not all, vehicles will have self-driving capability.
What isn’t clear is what it will look like. Getting from point A to point B is easy enough (GPS), and avoiding hitting anything else seems to be in the bag, too. What hasn’t been figured out is how to stop those awful traffic jams. I live in Los Angeles, and a 3-hour commute on a Friday afternoon is commonplace. In fact, Angelenos typically spend between 6 and 20 hours a week in their cars, engines running, gas guzzling and tempers fraying!
In LA especially, each car usually carries a single occupant, so that’s a lot of gas, metal and pavement space for a small payload. This leads us to the idea of:
- Automating car control and centralizing routing. This would allow, via a cloud app, load-balancing the roads and routing around slowdowns
- Making the vehicles single or dual seater electric mini-cars
- Using the mini-cars to pack lanes more effectively and move cars closer together
We are talking big impacts here. The size change alone should at least triple the commute capacity of the city road system. It would also improve power usage by a huge factor: around a 3X reduction from the smaller vehicle itself, and around another 2X from avoiding stop-and-go driving, for roughly a 6X improvement overall.
All of this works only if the central “brain” works on near-real-time information. This brain, likely an AI, has to know detailed flow data, as well as hazard information, so that traffic can be routed away from a wreck with minimal delay. This isn’t a small-scale correction process, either. A wreck will ripple outward from its origin as rerouted traffic impinges on smoothly flowing, but possibly near-capacity, flows on other roads, much like the near-neighbor problem in computer clusters.
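At its core, this kind of rerouting is a shortest-path search over a road graph whose edge weights are live travel times. Here is a minimal sketch of the idea using Dijkstra’s algorithm; the road names and travel times are invented for illustration, and a real central brain would obviously refresh the weights continuously from flow data rather than mutate a dict:

```python
import heapq

def shortest_route(graph, source, dest):
    """Dijkstra over a road graph. Edge weights represent current
    travel times; rerouting falls out of re-running the search
    whenever the weights change."""
    dist = {source: 0.0}
    prev = {}
    pq = [(0.0, source)]
    visited = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if node == dest:
            break
        for nbr, cost in graph.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    if dest not in dist:
        return None, float("inf")
    # Walk the predecessor chain back to the source.
    path = [dest]
    while path[-1] != source:
        path.append(prev[path[-1]])
    return list(reversed(path)), dist[dest]

# Normal conditions: two routes from A to D, with A-B-D the faster.
roads = {
    "A": {"B": 10, "C": 15},
    "B": {"D": 10},
    "C": {"D": 10},
}
path, cost = shortest_route(roads, "A", "D")    # A-B-D, cost 20

# A wreck inflates the B->D travel time; the router shifts
# traffic onto the alternate path automatically.
roads["B"]["D"] = 60
path2, cost2 = shortest_route(roads, "A", "D")  # A-C-D, cost 25
```

The ripple effect described above shows up naturally in this model: diverting flow onto A-C-D raises travel times there, which the next search iteration has to absorb in turn.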
All of this smartness will impact the way we design road systems. The new focus will be on resilience to bottlenecks rather than distance to destination, which means more cross-linking and path redundancy. With automated trucks, delivery patterns will move outside of the commute periods altogether, freeing up even more space. All of this, though, will come with a lot of trial and error.
So is there an analogy to how we will manage computer clusters in the next decade? The problems and potential solutions actually look much the same. We have network overload looking much like choked roads. Payloads are bloated by out-of-date protocols that transfer a minimum of 4KB instead of just the few altered bytes – the large car. We have slow-downs and wrecks in hardware, apps and networks and, in a cluster, we have alternate paths and spare gear – an “Uber” will get you home!
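The “few altered bytes” point is the idea behind delta transfer: ship only the changed ranges, not the whole block. A toy sketch of the concept (the function names and the equal-length-block assumption are mine, purely for illustration):

```python
def byte_delta(old: bytes, new: bytes):
    """Return (offset, replacement) pairs for the runs of bytes that
    changed -- a stand-in for real delta/differential transfer.
    Assumes old and new are the same length."""
    delta = []
    i = 0
    while i < len(new):
        if old[i] != new[i]:
            start = i
            while i < len(new) and old[i] != new[i]:
                i += 1
            delta.append((start, new[start:i]))
        else:
            i += 1
    return delta

def apply_delta(old: bytes, delta):
    """Rebuild the new block from the old one plus the delta."""
    buf = bytearray(old)
    for off, chunk in delta:
        buf[off:off + len(chunk)] = chunk
    return bytes(buf)

block = bytes(4096)                  # the 4KB "large car"
updated = bytearray(block)
updated[100:103] = b"xyz"            # only 3 bytes actually changed
updated = bytes(updated)

d = byte_delta(block, updated)
wire_bytes = sum(len(chunk) for _, chunk in d)  # 3 bytes, not 4096
assert apply_delta(block, d) == updated
```

Three payload bytes on the wire instead of 4096 is the single-seater mini-car of the storage world, though real protocols add framing and offset overhead on top.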
In the commute model, a wreck on the 405 freeway will quickly spread across the whole system. Containing the problem and initiating rerouting have to begin within just a few minutes. Computer clusters don’t usually have the luxury of such a slow recovery. Depending on the resiliency of the app, stalling a single thread can take down whole job streams in just seconds as interdependencies build up. Just as insidiously, a bottlenecked thread, caused by app error, CPU problems or network or storage issues, can delay job completion for a whole chunk of threads. We have to design for this. Failures will happen!
The answer is to metricate the system with real-time “sensors” that can spot issues as soon as they occur. A good solution is somewhat open-ended, insofar as we are still in learning mode on this approach. This allows OEM vendors and ISVs to add to the ecosystem over time, building robustness of operation. (This is something I’d recommend to the traffic-system programmers, too.)
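Such a “sensor” can be as simple as comparing each new reading against a rolling baseline and raising an alarm on a large deviation. A minimal sketch; the class name, window size, warm-up count and 3X threshold are all illustrative choices, not tuned values:

```python
from collections import deque

class LatencySensor:
    """Toy real-time sensor: keeps a rolling window of recent samples
    and flags any reading far above the recent average."""

    def __init__(self, window=20, factor=3.0):
        self.samples = deque(maxlen=window)
        self.factor = factor

    def observe(self, value):
        """Record one reading; return True if it looks anomalous."""
        baseline = (sum(self.samples) / len(self.samples)
                    if self.samples else value)
        # Require a few samples of history before alarming at all.
        alarm = len(self.samples) >= 5 and value > self.factor * baseline
        self.samples.append(value)
        return alarm

sensor = LatencySensor()
readings = [10, 11, 9, 10, 12, 10, 95]  # the last reading is a spike
alarms = [sensor.observe(r) for r in readings]  # only the spike alarms
```

Production systems would use something more robust (percentiles, exponential smoothing), but the shape is the same: continuous measurement feeding a cheap, always-on anomaly check, which downstream tools can then act on.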
The gathered data can feed a mix of tools, including traditional apps, Big Data analytics code and an AI engine. Again, these tools are open-ended and extensible, to enrich the overall ecosystem.