- Without using any tools,
- like a profiler (e.g. gprof) to see how much time is spent where,
- nor any other tools like "valgrind + cachegrind" to see how many operations are performed in either of the two functions,
- and also ignoring all compiler optimizations, i.e. compiling with -O0,
- and assuming whatever else there is in the two functions (what you represent as /* all required things */) is trivial,
then all one can say, just by looking at both of your functions, is that both have a complexity of O(n), since both spend most of their time in the two for loops. Depending on how big the matrices are, and especially if they are really large, everything else in the code is pretty much insignificant when it comes down to speed.
So, what your question boils down to, in my opinion, is:

- how much time it takes to call the constructor of C, plus returning this C,

versus

- how much time it takes to call the resize function for C, plus calling the copy constructor of C.
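For concreteness, here is a minimal sketch of the two variants being compared. Your actual matrix class and function bodies are not shown in the question, so the member names and the element-wise addition below are assumptions, not your code:

#include <cstddef>
#include <vector>

// Hypothetical minimal matrix type; the real class will differ.
struct matrix {
    std::size_t rows = 0, cols = 0;
    std::vector<double> data;
    matrix() = default;
    matrix(std::size_t r, std::size_t c) : rows(r), cols(c), data(r * c) {}
    void resize(std::size_t r, std::size_t c) { rows = r; cols = c; data.assign(r * c, 0.0); }
};

// Variant 1: construct C inside the function and return it.
matrix operator+(const matrix& A, const matrix& B) {
    matrix C(A.rows, A.cols);                 // constructor of C
    for (std::size_t i = 0; i < A.rows; ++i)  // the two for loops
        for (std::size_t j = 0; j < A.cols; ++j)
            C.data[i * A.cols + j] = A.data[i * A.cols + j] + B.data[i * A.cols + j];
    return C;                                 // returning this C
}

// Variant 2: the caller supplies C, which is resized and filled in place.
void Add(const matrix& A, const matrix& B, matrix& C) {
    C.resize(A.rows, A.cols);                 // resize of C
    for (std::size_t i = 0; i < A.rows; ++i)  // the same two for loops
        for (std::size_t j = 0; j < A.cols; ++j)
            C.data[i * A.cols + j] = A.data[i * A.cols + j] + B.data[i * A.cols + j];
}

Here matrix D = A + B; exercises the first variant and matrix D; Add(A, B, D); the second, which is exactly the pair the timing snippet below compares.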
This you can 'crudely but relatively quickly' measure using std::clock() or chrono, as shown here in multiple answers.
#include <chrono>
auto t_start = std::chrono::high_resolution_clock::now();
matrix D = A+B; // To compare replace on 2nd run with this ---> matrix D; Add(A,B,D);
auto t_end = std::chrono::high_resolution_clock::now();
double elapsedTimeMs = std::chrono::duration<double, std::milli>(t_end-t_start).count();
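Since a single run can be noisy, one could also repeat each variant a few times and average the result; a rough sketch (the helper name averageMs is mine, not from the question or any library):

#include <chrono>
#include <iostream>

// Crude helper: run a callable 'runs' times and return the average time in milliseconds.
template <typename F>
double averageMs(F&& work, int runs = 10) {
    double totalMs = 0.0;
    for (int r = 0; r < runs; ++r) {
        auto t0 = std::chrono::high_resolution_clock::now();
        work();
        auto t1 = std::chrono::high_resolution_clock::now();
        totalMs += std::chrono::duration<double, std::milli>(t1 - t0).count();
    }
    return totalMs / runs;
}

// Usage, with A and B already filled:
// std::cout << averageMs([&] { matrix D = A + B; }) << " ms\n";
// std::cout << averageMs([&] { matrix D; Add(A, B, D); }) << " ms\n";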
Although, once again, in my honest opinion, if your matrices are big, most of the time would go into the for loops.
p.s. Premature optimization is the root of all evil.