Many are thinking about how to spend fewer tokens; few are thinking about how to scale usage in useful ways. What happens when hundreds of deep research docs are processed in parallel and all the results are aggregated? Or when a hundred competing apps to solve a single problem are created at the same time?
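To make the fan-out-then-aggregate idea concrete, here is a minimal sketch in Python; `run_deep_research` and `aggregate` are hypothetical placeholders standing in for whatever LLM or research API would do the real work.

```python
# Minimal sketch of the fan-out / aggregate pattern. run_deep_research and
# aggregate are hypothetical stand-ins, not a real provider's API.
import asyncio

async def run_deep_research(topic: str) -> str:
    # Placeholder: in practice this would call a deep-research / LLM endpoint.
    await asyncio.sleep(0)  # simulate I/O
    return f"findings for {topic}"

async def fan_out(topics: list[str]) -> list[str]:
    # Launch all research tasks concurrently and wait for every result.
    return await asyncio.gather(*(run_deep_research(t) for t in topics))

async def aggregate(results: list[str]) -> str:
    # Placeholder aggregation: in practice, a second pass that merges,
    # deduplicates, and ranks the individual findings.
    return "\n".join(results)

async def main() -> None:
    topics = [f"question #{i}" for i in range(100)]
    results = await fan_out(topics)
    summary = await aggregate(results)
    print(summary[:200])

if __name__ == "__main__":
    asyncio.run(main())
```

The point of the sketch is only the shape of the workflow: hundreds of independent runs launched at once, then a single aggregation step over everything they produce.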