Many have already touched on this, but you hit the nail on the head with the third paragraph. It's always smart to prepare, but any attempt to use this to reduce workers will go horribly. Saving isn't crazy in this regard, but I wouldn't plan on it long term until LLMs become less expensive, develop better reasoning, and most importantly perform meaningfully better on longer context windows without degrading. These aren't easy solves; they brush up against fundamental limits of the tech.
Not a scientist, but the article seems to mean they checked that the tools themselves had no defects that could give incorrect measurements.
This comment seems to be questioning the methodology of how we measure the rate of expansion, so it tackles a different aspect of the conversation.
But that’s about as much as I can contribute haha