Have you ever asked yourself, while writing a piece of code, whether it could perform faster? The answer is most likely yes. Programming is not only about writing functional blocks. Speed and code efficiency are directly connected to user satisfaction, even if you do not see it right away on your screen. Most frameworks allow you to iterate through collections like it's nothing. Until… the collections meet each other.

An iteration is nothing more than an algorithm running a couple (or dozens) of times. The speed of this algorithm depends on the size of the collection (the amount of data you run through the iteration), which can be pretty relative. To keep track of this speed there's something called the Big O notation. It's kind of like the Big Mac Index, but without the burger.

Big O

The Big O notation basically tells you how long an algorithm takes to run. It can be used for any programming language and is a way to use math to keep track of… math. Big O tracks how quickly the runtime grows, not the runtime itself. We cannot assume the tested algorithm always runs on the same hardware, so we cannot trust the raw runtime; the growth of the runtime, however, is the same on every set of hardware. That growth is measured in 'n', which can be described as the size of the collection or input. As 'n' gets larger and larger, we can expect the execution to take longer. For simple inputs, let's say we are iterating over one collection, we use O(n), where n is relative to the input. For iterating a collection within a collection we use O(n^2); n is squared because of the nesting. Iterating a collection within a collection within a collection? As you may have guessed: O(n^3). Sounds cool, right?
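To make that growth concrete, here is a minimal sketch (the class and method names are my own, invented for illustration) that simply counts how many times each loop shape runs. Doubling n doubles the linear count, but quadruples the quadratic one:

```csharp
using System;

public class GrowthDemo
{
    // A single loop: runs n times, O(n).
    public static int CountLinear(int n)
    {
        int operations = 0;
        for (int i = 0; i < n; i++)
        {
            operations++;
        }
        return operations;
    }

    // A loop within a loop: runs n * n times, O(n^2).
    public static int CountQuadratic(int n)
    {
        int operations = 0;
        for (int i = 0; i < n; i++)
        {
            for (int j = 0; j < n; j++)
            {
                operations++;
            }
        }
        return operations;
    }

    public static void Main()
    {
        Console.WriteLine(CountLinear(10));    // 10
        Console.WriteLine(CountLinear(20));    // 20
        Console.WriteLine(CountQuadratic(10)); // 100
        Console.WriteLine(CountQuadratic(20)); // 400
    }
}
```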

Why use it?

Big O can be hard to understand at first due to its abstract, mathematical notation. But the more you get used to it, the more you will use it without thinking about it. Once it becomes part of your programming routine, your code will definitely benefit from it. How does code benefit from a mathematical analysis? Well, it makes you think about how fast the code runs, why it runs at that speed and whether you could do better. Surely your code will be happy to get some extra attention.

Examples

```csharp
private static void iterateSimpleCollectionSample(int[] numbers)
{
    for (int i = 0; i < numbers.Length; i++)
    {
        Console.WriteLine("Number: " + numbers[i]);
    }
}

private static void iterateSimpleCollectionSample2(int[] numbers)
{
    foreach (var number in numbers)
    {
        Console.WriteLine("Number: " + number);
    }
}

private static void iterateSimpleCollectionSample3(int[] numbers)
{
    foreach (var number in numbers)
    {
        Console.WriteLine("Number: " + number);
    }
    for (int i = 0; i < numbers.Length; i++)
    {
        Console.WriteLine("Number: " + numbers[i]);
    }
    int n = 0;
    while (n < numbers.Length)
    {
        Console.WriteLine("Number: " + numbers[n]);
        n++;
    }
}
```

All three methods above run in O(n). The third method, which uses three different ways of iterating, could be described as O(3n), but that is still O(n) because the growth remains linear; constant factors are dropped. So far, not really that exciting. How about some nested loops?

```csharp
private static void nestedIterationSample1(int[] numbers)
{
    // Build a reversed copy of the input first (a single, sequential loop).
    List<int> reversed = new List<int>();
    for (var j = numbers.Length - 1; j >= 0; j--)
    {
        reversed.Add(numbers[j]);
    }
    foreach (var number in numbers)
    {
        Console.WriteLine("Number: " + number);
        foreach (var reversedNumber in reversed)
        {
            Console.WriteLine("Product: " + number * reversedNumber);
        }
    }
}
```

That looks more like a situation you might actually encounter. Because this method contains one nested iteration it runs in O(n^2): if the array contains 10 numbers, the inner body runs 10 × 10 times. Imagine if the list contained a thousand numbers! And most importantly: what if a third loop were added inside the nested loop?
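To sketch that third loop (a hypothetical extension of the sample above, not production code): three nested loops over the same input run in O(n^3), so 10 numbers already mean 1,000 passes through the innermost body.

```csharp
using System;

public class CubicDemo
{
    // Three nested loops over the same collection: O(n^3).
    public static int CountTripleNested(int[] numbers)
    {
        int passes = 0;
        foreach (var a in numbers)
        {
            foreach (var b in numbers)
            {
                foreach (var c in numbers)
                {
                    passes++; // innermost body runs n * n * n times
                }
            }
        }
        return passes;
    }

    public static void Main()
    {
        Console.WriteLine(CountTripleNested(new int[10])); // 1000
    }
}
```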

Big O is not always a precise measure of runtime growth. For instance: what if we're iterating through a list of people, looking for 'Mike'? Mike might be the first one in the list, or the last. If Mike is first, the algorithm effectively runs in O(1) (the best case); if he is last, or not in the list at all, it runs in O(n) (the worst case). When you cannot know in advance how fast the runtime will grow, it is safest to reason from the worst case and consider the algorithm O(n).

```csharp
private static void findPersonInList(string[] people)
{
    for (int i = 0; i < people.Length; i++)
    {
        if (people[i] == "Mike")
        {
            Console.WriteLine("Found him at " + i);
            break; // stop looping once Mike is found
        }
    }
}
```
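A variant that returns as soon as Mike is found makes the best and worst case visible (the method name and the -1 "not found" convention here are my own):

```csharp
using System;

public class SearchDemo
{
    // Returns the index of "Mike", or -1 when he is not in the list.
    // Best case: he is first, one comparison, O(1).
    // Worst case: he is last or absent, n comparisons, O(n).
    public static int FindMike(string[] people)
    {
        for (int i = 0; i < people.Length; i++)
        {
            if (people[i] == "Mike")
            {
                return i; // stop searching as soon as he is found
            }
        }
        return -1;
    }

    public static void Main()
    {
        Console.WriteLine(FindMike(new[] { "Mike", "Anna" })); // 0
        Console.WriteLine(FindMike(new[] { "Anna", "Mike" })); // 1
        Console.WriteLine(FindMike(new[] { "Anna", "Bob" }));  // -1
    }
}
```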

Conclusion

While developing Mavention Workspace we continuously think about code efficiency and performance. Lots of data requires lots of data manipulation and iterations. Big O can be useful to help understand how efficient those iterations are. It's better to have O(n) than O(n^x). Optimizing functionality can therefore help speed up performance: instead of a double iteration, it's often better to split the loops apart and create an O(n)-friendly environment. And if that's not possible, at least you thought about code optimization!
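As a sketch of what splitting a double iteration apart can look like (this duplicate check is an invented example, not code from Mavention Workspace), a HashSet turns an O(n^2) pairwise comparison into a single O(n) pass:

```csharp
using System;
using System.Collections.Generic;

public class DuplicateDemo
{
    // O(n^2): compares every pair of numbers.
    public static bool HasDuplicateNested(int[] numbers)
    {
        for (int i = 0; i < numbers.Length; i++)
        {
            for (int j = i + 1; j < numbers.Length; j++)
            {
                if (numbers[i] == numbers[j])
                {
                    return true;
                }
            }
        }
        return false;
    }

    // O(n): one pass, relying on the set's constant-time lookups.
    public static bool HasDuplicateLinear(int[] numbers)
    {
        var seen = new HashSet<int>();
        foreach (var number in numbers)
        {
            if (!seen.Add(number)) // Add returns false if the value was already present
            {
                return true;
            }
        }
        return false;
    }

    public static void Main()
    {
        var input = new[] { 3, 1, 4, 1, 5 };
        Console.WriteLine(HasDuplicateNested(input)); // True
        Console.WriteLine(HasDuplicateLinear(input)); // True
    }
}
```

Both methods give the same answer; the difference only shows in how the work grows when the input does.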
