rbt
Senior Member
Issue is, say one streetcar sends a packet, but then you have hundreds of buses and streetcars sending and downloading packets for the updated list, and soon enough the data requirements start to overload the processing power and bandwidth they'd ideally like to run these devices on (an educated guess on my part).
On a day-to-day basis I manage a platform that peaks at roughly 100,000 events per second, fully transactional and interactive (equivalent to credit-card charge processing during retail checkout). That's the low end; I've done consulting work on systems approaching 800,000 events per second (nearly 3 billion events per hour).
Presto isn't dealing with anywhere close to 1 billion riders per hour (transfers and tap-outs would turn that into 3 billion taps per hour). They'll be dealing with a peak load closer to 2,000 events per second system-wide, and it doesn't need to be real-time; a 2-minute lag before results show up on the website is perfectly acceptable. That's a pretty trivial load even on discount server hardware.
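For scale, the arithmetic is easy to sanity-check. A quick back-of-envelope sketch using only the numbers above (the 3x multiplier is the tap-in + transfer + tap-out pattern):

```python
# Sanity check on the scale comparison (numbers from the post).
riders_per_hour = 1_000_000_000           # the hypothetical 1 billion riders/hour
taps_per_rider = 3                        # tap-in + transfer + tap-out
taps_per_hour = riders_per_hour * taps_per_rider
print(f"{taps_per_hour:,} taps/hour")           # 3,000,000,000 taps/hour
print(f"{taps_per_hour / 3600:,.0f} taps/sec")  # ~833,333/sec -- the ~800k/sec tier
```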
It would be well under 1 MB/sec system-wide at peak load (500 bytes per record is a huge amount of space; it should be closer to 50 bytes per record to identify the customer, charge type, dollar amount, device, and GPS coordinates).
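To illustrate why ~50 bytes is plenty, here's a hypothetical wire format (my invention, not Presto's actual one): a fixed-size record carrying all five fields plus a timestamp fits in 45 bytes, and at the 2,000 events/sec peak above that's about 90 KB/sec system-wide.

```python
import struct

# Hypothetical tap record: customer ID, charge type, amount (cents),
# device ID, GPS lat/lon, Unix timestamp. Big-endian, no padding.
TAP_RECORD = struct.Struct(">QBIQddQ")   # 8+1+4+8+8+8+8 = 45 bytes

record = TAP_RECORD.pack(
    1234567890,         # customer ID
    1,                  # charge type code (e.g. adult single fare)
    325,                # charge amount in cents ($3.25, illustrative)
    42,                 # reader/device ID
    43.6532, -79.3832,  # GPS coordinates (downtown Toronto)
    1262350800,         # Unix timestamp
)

peak_events_per_sec = 2000                    # peak-load estimate from the post
print(TAP_RECORD.size)                        # 45 -- under the 50-byte estimate
print(peak_events_per_sec * TAP_RECORD.size)  # 90,000 bytes/sec, well under 1 MB/sec
```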
That kind of thing was a huge challenge in the late 90s when London and Hong Kong were starting their card implementations; it was much less of a problem by the time they launched (yay EVDO networks). Now it's very straightforward: off-the-shelf hardware and software handle the difficult bits, both for the network and the central backend. They just need to write the glue.
The hardest part for Metrolinx/TTC is going to be inter-departmental politics: getting agreement to let the Presto devices use the in-bus network to communicate with the Presto backend.