So I was working on a proof that might show a specific algorithm to be the fastest possible algorithm for multiplying matrices, and I think I may be onto something, but I'm not quite sure.
Here is my proof:
The equation used to determine the number of operations needed for a specific problem can be written as O_n = P_n + n. Here O_n represents the total number of operations performed throughout the process, P represents the total number of steps needed to complete the problem (which varies with the complexity of the problem), and n represents the first step of the process to be solved.

With matrices, problems are very specific: the order of the elements of a given matrix, and the order in which the matrices are multiplied, both determine the problem. So there is no broad algorithm that can be used to get the correct answer, because of the fragility of the problem.

Now, with O_n = P_n + n, processing the problem requires that P increase, but n must stay the same because it represents the first step of the problem, which means that n = 1. On the first step of the problem, P = 0, because no process has been set to be solved until the next step, but it is still a step of the process, which means that O_n = 1. In that case, this process only requires that each specific operation of the problem be done once, specific to the step it must take to reach the solution of the problem.

Therefore O_n, the number of operations required, is equal to P_n, the step at which the problem arrived at the solution, multiplied by the starting step (which equals 1, since it is the first step of the problem), plus n, because step 1 had to be completed first in order to reach the next step involving the first step, and so on.
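For comparison, here is a small sketch of what "number of operations" usually means for matrix multiplication. This is not the O_n = P_n + n formula above; it just counts the scalar multiplications that the naive (schoolbook) algorithm actually performs, which is the baseline any "fastest algorithm" claim would be measured against.

```python
# Illustration only: count scalar multiplications in the naive
# (schoolbook) matrix-multiplication algorithm. For n x n matrices
# this algorithm performs exactly n**3 scalar multiplications.

def multiply_and_count(A, B):
    """Multiply two square matrices; return the product and the
    number of scalar multiplications performed."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    mults = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
                mults += 1
    return C, mults

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C, mults = multiply_and_count(A, B)
print(C)      # [[19, 22], [43, 50]]
print(mults)  # 8, i.e. 2**3
```

Faster algorithms (Strassen's, for example) reduce that n**3 count, which is why a proof about the fastest algorithm normally has to bound this kind of operation count from below.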
That is my proof right now. I can modify it, but I just wanted people's opinions. It probably isn't a valid proof, but it took time to work through and to fix contradictions.