Iterating on BFS

Hello, internet. I am back after a short break wherein my mind was fully and thoroughly exhausted by the concepts of my Algorithms course. But there is a light at the end of that tunnel, and we’re on the downhill side of things.

So, with that in mind, I wanted to post a bit about this week’s material. If you’ve perused the site, you’ve seen that I have a rudimentary implementation of breadth-first search in my portfolio. This week’s concepts kick that version of BFS up a notch by adding a mechanism for tracking vertices we’ve already visited during the search: ‘White’ is undiscovered, ‘Grey’ is discovered, and ‘Black’ is discovered with all of its neighbors explored. We also track each node’s depth. See the graphic below.

You’re probably thinking, “So what?” Well, I was too. But there are some pretty interesting things we can do with just this constant amount of additional space per node. For starters, we can build a relationship tree, even on cyclical graphs. We can also compute the distance from one node to another. And once we can compute distance, we can find shortest paths as well. All of which have applications to real-life problems. Neat!
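To make that concrete, here’s a minimal sketch of a colored BFS in Java. It’s my own illustration, not the course’s code: the Color enum, the map-based adjacency list, and all the names are assumptions, but it shows how the white/grey/black states, parent links, and depth tracking fit together.

import java.util.*;

public class ColoredBFS {
    enum Color { WHITE, GREY, BLACK }

    // adj maps each vertex to its neighbor list (every vertex appears
    // as a key); dist and parent are filled in as output.
    public static void bfs(Map<Integer, List<Integer>> adj, int source,
                           Map<Integer, Integer> dist,
                           Map<Integer, Integer> parent) {
        Map<Integer, Color> color = new HashMap<>();
        for (Integer v : adj.keySet()) color.put(v, Color.WHITE); // undiscovered

        color.put(source, Color.GREY);  // discovered, neighbors not yet explored
        dist.put(source, 0);
        parent.put(source, null);       // the source is the root of the BFS tree

        Deque<Integer> queue = new ArrayDeque<>();
        queue.add(source);
        while (!queue.isEmpty()) {
            int u = queue.remove();
            for (int v : adj.getOrDefault(u, List.of())) {
                if (color.get(v) == Color.WHITE) {  // first time we see v
                    color.put(v, Color.GREY);
                    dist.put(v, dist.get(u) + 1);   // depth: one level deeper than u
                    parent.put(v, u);               // edge in the relationship tree
                    queue.add(v);
                }
            }
            color.put(u, Color.BLACK);  // all of u's neighbors are explored
        }
    }
}

The parent map is exactly the relationship tree mentioned above, and on an unweighted graph dist holds shortest-path lengths from the source.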

Al Gore Rhythms

It’s been a while! Sorry, but I just don’t have the extra drive or head-space to really do any regurgitation or insightful recaps on what I am learning this semester. I barely found the time to post this delightful meme!

In any case, Algorithms is living up to its reputation as a dream killer of a class. I am holding on to my A, but it’s taking a lot of work and study. I won’t even get started on the proofs. Just, no. I’ll post something a bit more detailed when I come up for air. See you in the spring!

2-4 Trees

Binary search trees have been the topic of conversation in our recent modules, and of all the trees we’ve studied (BST, Red-Black, AVL, etc.), I have to say that 2-4 trees are probably the easiest to understand. That opinion may be skewed by the fact that it was also the last one we touched on, so understanding the others may have made it seem simpler. Anyway…

There are some pretty distinct differences in these multi-way search trees. Most notably, each node stores multiple values (up to 3 in this implementation), and the trees grow upward toward the root, rather than down toward new leaf levels. This design keeps the total height of the tree as small as possible.

Rather than memorizing patterns for how to ‘swap’ nodes or re-color them as with RB and AVL trees, 2-4 trees use simple value comparison to decide where values go, so they bear a much closer resemblance to simple binary search trees. When a node is full and a value is passed into it, you take the middle value and pass it up toward the root, splitting the lesser and greater values into new nodes, then add the value that was originally passed in to whichever side it evaluates to (less than or greater than the middle value). Here’s a graphic to visualize what I’m describing.
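To show just that split step in code (not a full insert routine), here’s a small Java sketch. The Node24 class and all of its field and method names are assumptions I’m making for illustration, not our course’s implementation.

import java.util.ArrayList;
import java.util.List;

class Node24 {
    List<Integer> keys = new ArrayList<>();    // 1 to 3 sorted values
    List<Node24> children = new ArrayList<>(); // empty for a leaf

    boolean isFull() { return keys.size() == 3; }

    // Split the full child at index i of parent: the middle value is
    // promoted up into the parent, and the lesser and greater values
    // become two new sibling nodes. The value being inserted then
    // descends into whichever side it compares into.
    static void split(Node24 parent, int i) {
        Node24 full = parent.children.get(i);
        Node24 left = new Node24();
        Node24 right = new Node24();

        left.keys.add(full.keys.get(0));   // lesser value
        right.keys.add(full.keys.get(2));  // greater value

        if (!full.children.isEmpty()) {    // interior node: split its children too
            left.children.addAll(full.children.subList(0, 2));
            right.children.addAll(full.children.subList(2, 4));
        }

        parent.keys.add(i, full.keys.get(1));  // pass the middle value up
        parent.children.set(i, left);
        parent.children.add(i + 1, right);
    }
}

Splitting full nodes on the way down like this means the parent always has room for the promoted value, and when the root itself splits, the tree gains a level at the top, which is exactly the upward growth described above.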

The Adventures of Link(s)

Or, my poor attempt at humor about linked lists.

Alright, new topic. We’re covering a completely new data structure called Linked Lists. I have, surprisingly, never been exposed to these even in my self-taught years as a programmer. I have to say, I really appreciate the simplicity of this structure. Take any class, add a field or two, and BAM, you’re done. Image incoming.

Image from the University of San Francisco

So what are the advantages? The biggest is dynamic memory allocation, so if that’s something you’re after, this is a structure to consider. It’s also quite simple to reorder or insert new elements: all you do is update the next references in the new element and in the element to the left of where it’s being inserted (in the singly linked example above).

Tradeoffs? No random access. This is a bigger deal than it seems at first. If you can’t jump straight to the Nth element of a structure, you can’t divide it for binary search, or perform a lot of other algorithms on it. Also, speaking specifically about Java here, if you order your insertion operations incorrectly, you can leave the tail of your structure unreachable and flag it for garbage collection (yikes). So it’s really important to set the next field on your new object first (preserving a reference to the remainder of the list on the heap), THEN link the left object to the new object, completing the link.
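Here’s a quick sketch of that ordering in Java. The class and field names are my own, but the two-step sequence is the point:

public class SinglyLinkedList<T> {
    private static class Node<T> {
        T value;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private Node<T> head;  // keeps the front of the list reachable

    // Insert newValue immediately after the node 'left'.
    public void insertAfter(Node<T> left, T newValue) {
        Node<T> node = new Node<>(newValue);
        node.next = left.next;  // step 1: preserve the rest of the list
        left.next = node;       // step 2: complete the link
    }
}

Reversing those two steps would overwrite left.next before anything else points at the old tail, leaving it unreachable.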

The version pictured above is a singly linked list, though it’s pretty trivial to add another field called ‘prev’ and link it in the other direction for bi-directional access to elements in the structure. There are a ton of other flavors of the linked structure, too.
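For example, a doubly linked node might look like this (again, the names are just illustrative):

private static class DNode<T> {
    T value;
    DNode<T> next;  // link to the element on the right
    DNode<T> prev;  // link back to the element on the left
    DNode(T value) { this.value = value; }
}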

Sorting out Sorts

I wanted to drop some information here to “rubber duck” some of the things we are learning in Data Structures this semester. I’m going to post a bit of code, then talk about it. Then post some more, ad nauseam.

Note: the code I’m posting is from lecture, I am not claiming it to be my own.

public void mergeSort(Comparable[] a, int left, int right) {
    if (right <= left) return;  // base case: 0 or 1 elements, already sorted

    int mid = left + (right - left) / 2;  // written this way to avoid int overflow

    mergeSort(a, left, mid);       // sort the left half
    mergeSort(a, mid + 1, right);  // sort the right half

    merge(a, left, mid, right);    // combine the two sorted halves
}

Let’s start with the above code. This is merge sort. It’s one of the cooler sorts we’ve come across thus far (having O(N log N) time in all cases is pretty spiffy). What’s not cool is trying to wrap your head around it!

So, let’s get to the first point of frustration. This is a recursive function. Moreover, it’s a recursive function that branches, making it even more challenging to follow. The best way I’ve found to visualize it is similar to binary search, except we aren’t throwing half of the set away; instead, we divide the problem in half recursively until it becomes trivial to sort (i.e. there’s only 1 item left in the set!). Next up, we do the merging.

public void merge(Comparable[] a, int left, int mid, int right) {
    // aux is an instance-level Comparable[] the same length as a
    for (int k = left; k <= right; k++) aux[k] = a[k];

    int i = left, j = mid + 1;  // i walks the left half, j walks the right
    for (int k = left; k <= right; k++) {
        if (i > mid) a[k] = aux[j++];                   // left half exhausted
        else if (j > right) a[k] = aux[i++];            // right half exhausted
        else if (less(aux[j], aux[i])) a[k] = aux[j++]; // take the smaller element
        else a[k] = aux[i++];
    }
}

Alright, so remember that at the bottom of our recursion each half holds only 1 element, so merging just combines two single elements in order. It’s worth noting that we copy the data into an auxiliary array and merge back from it (this is merge sort’s primary downfall: double the memory usage). As we traverse back up the recursion, the sorted sub-arrays get larger and larger until, finally, we end up with a sorted array of the original data.
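One thing the lecture snippets leave implicit is where aux comes from and what less() does. A minimal sketch of the surrounding pieces might look like this; the field and method names are my assumption, not the lecture’s:

private Comparable[] aux;  // shared scratch space used by merge()

public void sort(Comparable[] a) {
    aux = new Comparable[a.length];  // the extra O(N) space noted above
    mergeSort(a, 0, a.length - 1);
}

private boolean less(Comparable v, Comparable w) {
    return v.compareTo(w) < 0;  // true if v sorts before w
}

Allocating aux once up front, rather than inside merge(), avoids re-allocating the scratch array at every level of the recursion.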

init();

Greetings. My name is Jesse Wade and this site showcases examples of my work. I am a student of Computer Science at Auburn University and have been a student of life for (much) longer! A few of the other lives I have lived were as a Firefighter and Paramedic in the U.S. Army, as an IT customer support technician, and as both a System and a Network Administrator for various government organizations. My pool of experience is pretty wide, and deep enough, I’d say!

Over my career in technology, as paradigms shifted (as they tend to do) toward software-defined everything, I developed a keen interest in writing code. It began with scripting and automation tasks in Windows Server with PowerShell, and blossomed into Python and Bash when I began working with Linux. These days I spend a lot of time looking at Java, XML, JSON, Python, and a bit of Perl as I maintain our systems and software.

Being self-taught was fine, but I soon realized there were a lot of pieces missing for me, so I sought out a CS program and landed on Auburn. I am loving the learning I am doing there and can’t wait to show off some of the skills and theories I have come to understand. My goal is to transition into full-time software development once I complete the program.