Illya Starikov
We are well aware of Euler’s famous number,
$$
e = 2.71828182845\ldots
$$
There are also quite a few ways of deriving Euler’s Number. There’s the Taylor expansion method:
$$
e = \sum _{k=0} ^{\infty} \frac{1}{k!}
$$
There is also the classical limit form:
$$
e = \lim_{n \rightarrow \infty} \left( 1 + \frac{1}{n} \right)^n
$$
Then there is another. Let $R$ be a random number drawn uniformly from $[0, 1]$. Then $e$ is the average number of such draws it takes for their sum to exceed $1$.
With more rigor, for independent uniform $(0, 1)$ random variables $R_1, R_2, \ldots$,
$$
N = \min \left\{n: \sum_{k=1}^n R_k > 1 \right\}
$$
where
$$e = \mathbb{E}[N]$$
The proof can be found here, but it’s pretty math-heavy. An easier route is to write a program and verify the convergence for large enough $n$.
n | Sum Solution | Limit Solution | Random Uniform Variable |
---|---|---|---|
1 | 2 | 2 | 2 |
10 | 2.7182818011 | 2.5937424601 | 2.5 |
100 | 2.7182818284 | 2.7048138294 | 2.69 |
1000 | 2.7182818284 | 2.7169239322 | 2.717 |
10000 | | 2.7181459268 | 2.7242 |
100000 | | 2.7182682371 | 2.71643 |
1000000 | | 2.7182804690 | 2.71961 |
10000000 | | 2.7182816941 | 2.7182017 |
100000000 | | 2.7182817983 | 2.71818689 |
1000000000 | | 2.7182820520 | 2.718250315 |
Source Code
import math
import random
from decimal import Decimal


def e_sum(upper_bound):
    """Partial sum of 1/k! for k = 0 through upper_bound."""
    if upper_bound < 0:
        return Decimal(0)
    return Decimal(1) / Decimal(math.factorial(upper_bound)) + e_sum(upper_bound - 1)


def e_limit(n):
    """The limit definition, (1 + 1/n)^n."""
    return Decimal((1 + 1.0 / float(n))**n)


def find_greater_than_one(value=0, attempts=0):
    """Count how many uniform(0, 1) draws it takes for the running sum to exceed 1."""
    if value <= 1:
        return find_greater_than_one(value + random.uniform(0, 1), attempts + 1)
    return attempts
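To estimate $e$ from the random draws, one averages find_greater_than_one over many trials and compares against the other two methods. The driver below is a sketch of my own (the trial count and print format are assumptions, not part of the original source):

trials = 1_000_000

# Estimate e as the average number of uniform draws needed to exceed 1.
random_estimate = sum(find_greater_than_one() for _ in range(trials)) / trials

print(e_sum(100))        # series partial sum
print(e_limit(trials))   # limit definition
print(random_estimate)   # Monte Carlo estimate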
Arguably the first (and most successful) problem solver we know of is Evolution. Humans (along with every other species) share a common problem: becoming the best at surviving our environment.
Just as Darwin’s finches evolved their beaks to survive in different parts of the Galápagos Islands, we, too, evolved to survive in different parts of the world. And we can program a computer to do the same.
Evolution inspired a whole generation of problem solving, commonly known as Evolutionary Algorithms (EAs). EAs are known for solving (or at least approximating) solutions to borderline-unsolvable problems. And just as the mechanics of evolution are not that complicated, neither are the mechanics of EAs.
Today, we will build an Evolutionary Algorithm from the ground-up.
An Introduction
Before we proceed with implementation or an in-depth discussion, we first wish to tackle three questions: what is an Evolutionary Algorithm, what does an Evolutionary Algorithm look like, and what problems can Evolutionary Algorithms solve?
What Is An Evolutionary Algorithm?
An Evolutionary Algorithm is a generic, population-based optimization algorithm that generates solutions via biological operators. That is quite a mouthful, so let’s break it up.
Population-based. All Evolutionary Algorithms start by creating a population of random individuals. These individuals are just like an individual in nature: there is a genotype (the genes that make up an individual) and a phenotype (the result of the genotype interacting with the environment). In EAs, they would be defined as follows:
- Genotype The representation of the solution.
- Phenotype The solution itself.
Because it’s a little confusing to think of it this way, it’s often better to think about it in terms of a genotype space and a phenotype space.
- Genotype Space The space of all possible combinations of genes.
- Phenotype Space The space of all possible solutions.
Don’t worry if this doesn’t make sense, we’ll touch on it later.
Optimization Algorithm. Evolution is an optimization algorithm. Given an environment, it will try to optimize an individual for that environment with some fitness metric. Evolutionary Algorithms operate the same way.
Given an individual, it will try to optimize it. We do not use a literal environment, but we still use a fitness metric. The fitness metric is simply a function that takes in the genotype of an individual and outputs a value proportional to how good a solution is.
Because fitness metrics are proportional to how good a solution is, this implies a very important condition on our phenotype space: it must have a gradient, i.e., better solutions should be reachable through incrementally better ones.
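To make that concrete, here is an illustrative pair of fitness functions over bit strings (an example of my own; the names and target are not from the original post). The first gives the search a gradient to climb; the second gives no signal until the exact target is hit, so an EA degenerates into random search.

target = [1, 0, 1, 1, 0, 1, 0, 0]

def graded_fitness(genotype):
    # Partial credit for every matching bit: a gradient the EA can follow.
    return sum(g == t for g, t in zip(genotype, target))

def needle_fitness(genotype):
    # All-or-nothing: no gradient, no guidance.
    return 1 if genotype == target else 0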
Biological operators. Evolutionary Algorithms are inspired by biology and evolution. Just as biology has operators to generate new individuals, so do Evolutionary Algorithms. More on that later.
Generic. Evolutionary Algorithms are generic. Once the framework has been built, it can be reused for any representation of an individual (provided the representation has appropriate crossover and mutation operators).
What Does An Evolutionary Algorithm Look Like?
The pseudocode for an Evolutionary Algorithm is what we might expect from evolution itself: generate a random population, generate offspring, and let survival of the fittest do its job. And so it does:
BEGIN
INITIALISE population with random solutions
WHILE ( TERMINATION CONDITION is not satisfied ) DO
SELECT parents
RECOMBINE pairs of parents
MUTATE the resulting offspring
EVALUATE new candidates
SELECT individuals for the next generation
OD
END
What Problems Can Evolutionary Algorithms Solve?
Evolutionary Algorithms can solve any problem that has a genotype that can fit within our framework:
- The genotype can have a crossover operator.
- The genotype can have a mutator operator.
- The genotype can map to a definite fitness function.
Again, the fitness should be proportional to how good a solution is. If the fitness function $f(x)$ is bounded by $0 \leq f(x) \leq 100$, 0 should be the worst solution or no solution, and $100$ should be the best solution (or vice versa, for inverted fitnesses).
Implementing An Evolutionary Algorithm
We will be implementing a special class of Evolutionary Algorithm referred to as a (μ + λ) Evolution Strategy. The name is not important, but μ and λ will be; we will come back to them shortly.
For the purposes of our discussion, we are going to consider a sample problem: a deciphering program. The premise of the problem is as follows.
- There is a string of characters (without spaces) hidden away that, once set, is inaccessible.
- There are two ways to retrieve data about the hidden message:
- Get the length of the string.
- Given a string, the problem will output how many characters match between it and the hidden message.
The secret message would look as follows:
class SecretMessage:
    def __init__(self, message):
        """Initialize a Secret Message object.

        :message (str): The secret message to hide.
        """
        self.__message = message

    def letters_match(self, message):
        """Determine how many characters match the secret message.

        Note:
            The message and the secret message must be the same length (accessible via the length property).

        :message (str): The message to compare the secret message to.
        :returns (int): The number of characters matched.
        """
        return sum(self.__message[char] == message[char] for char in range(len(message)))

    @property
    def length(self):
        """Get the length of the secret message.

        :returns (int): The length of the secret message.
        """
        return len(self.__message)
Not too complicated.
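As a quick sanity check, here is how the class behaves (an illustrative snippet of my own, not from the original post):

secret = SecretMessage("abc")
print(secret.length)                # 3
print(secret.letters_match("abd"))  # 2 -- the first two characters match
print(secret.letters_match("xyz"))  # 0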
An Individual
In Evolutionary Algorithms, an individual is simply a candidate solution. An individual has a genotype (the representation) and operators (Crossover, Mutation, and Fitness) that act on the genotype. We will discuss them more extensively below.
The Genotype
As aforementioned, a genotype is the representation of an individual. Just as DNA does for humans, knowing the genotype gives you all the information needed to determine the characteristics of an individual.
Because a genotype must be acted upon by a crossover and a mutation operator, there are a few common choices for genotypes:
- Vectors[1]. A vector is common because crossover is trivial: take elements from the two genotypes to create a new individual. Mutation is also trivial: pick random elements in the vector and mutate them. Often, for a complex enough individual, a vector of bits is used[2] (see the sketch after this list).
- Matrices. Same as a vector, but with multiple dimensions.
- Floating-Point or Real Numbers. This one is tricky, but commonly used. There are a plethora of ways to recombine two numbers: averaging the two numbers, bit manipulation, or crossover on a binary encoding. The same can be said for mutation: adding a random value to the number, bit manipulation with a random value, or bit flipping in a binary encoding. It should be noted that some of these introduce biases, and one should account for them.
- Trees. Some problems can be broken down into trees easily (an entire programming language, for example, can be broken down into a parse tree). Crossover is trivial: swap a random subtree with another. Mutation, however, is often not used; the crossover itself acts as a mutation operator.
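For instance, here is a minimal sketch of 1-point crossover and per-bit mutation on a bit-vector genotype (an illustration of my own; the function names are not from the original post):

import random

def one_point_crossover(parent_one, parent_two):
    # Pick a pivot; the left half comes from one parent, the right half from the other.
    pivot = random.randrange(1, len(parent_one))
    return parent_one[:pivot] + parent_two[pivot:]

def mutate_bits(genotype, rate):
    # Flip each bit independently with probability `rate`.
    return [1 - bit if random.random() < rate else bit for bit in genotype]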
Next, our genotype must be initialized to some random values. Our initial population is seeded with said randomly-generated individuals, and with a good distribution, they will cover a large portion of the genotype space.
Keeping all this in mind, let us think about the representation for our problem. A string is nothing more than a vector of characters, so using the first bullet point, we get our operators pretty easily.
Here’s what our genotype would look like:
from random import choice
from string import ascii_letters


class Individual:
    message = SecretMessage("")

    def __init__(self):
        """Initialize an Individual object.

        Note:
            Individual.message should be initialized first.
        """
        length = Individual.message.length
        characters = [choice(ascii_letters) for _ in range(length)]
        self.genotype = "".join(characters)
Crossover, Mutation, and Fitness
As aforementioned, to fit within an Evolutionary Algorithm framework, a genotype must be created with crossover, mutator, and fitness operators. Although we have covered said operators, we will formalize them here.
- Crossover. A crossover operator simply takes in two genotypes and produces a genotype that is a mixture of the two. The crossover can be uniform (each element is taken at random from either genotype), 1-point (pick a pivot position; the left half comes from one genotype and the right half from the other), or $N$-point (same as 1-point, but with multiple pivot positions).
- Mutator. A mutator operator takes random values within the genotype and changes them to random values. There is a mutation rate associated with every genotype; we call it $p$. $p$ is bounded such that $0 \leq p \leq 1$, where $p$ is the fraction of the genotype that gets mutated. Be careful to limit this value, however; too high a $p$ reduces the search to a random one.
- Fitness. The fitness operator simply takes in a genotype, and outputs a numerical value proportional to how good a candidate solution is. Fitness has no limits, and can be inverted (i.e., a smaller fitness is better).
For the purposes of our program, we are going to have the following operators: crossover will be uniform, mutation will be a fixed number of mutating characters, and fitness will be a percentage of the characters matched.
class Individual:
    ...

    @property
    def fitness(self):
        """Get the fitness of an individual. This is done via a percentage of how many characters
        in the genotype match the actual message.

        :return (float): The fitness of an individual.
        """
        return 100 * Individual.message.letters_match(self.genotype) / Individual.message.length

    @staticmethod
    def mutate(individual, rate):
        """Mutation operator --- mutate an individual with a specified rate.

        This is done via a uniform random mutation, by selecting random genes and swapping them.

        Note:
            rate should be a floating point number (0.0 < rate < 1.0).

        :individual (Individual): The individual to mutate.
        :rate (float): The rate at which to mutate the individual's genotype.
        """
        number_of_characters_to_mutate = int(rate * individual.message.length)
        genotype_list = list(individual.genotype)  # Strings are immutable, we have to use a list

        for _ in range(number_of_characters_to_mutate):
            character_to_mutate = choice(range(individual.message.length))
            genotype_list[character_to_mutate] = choice(ascii_letters)

        individual.genotype = "".join(genotype_list)

    @staticmethod
    def recombine(parent_one, parent_two):
        """Recombination operator --- combine two individuals to generate an offspring.

        :parent_one (Individual): The first parent.
        :parent_two (Individual): The second parent.
        :returns (Individual): The combination of the two parents (the offspring).
        """
        new_genotype = ""

        for gene_one, gene_two in zip(parent_one.genotype, parent_two.genotype):
            gene = choice([gene_one, gene_two])
            new_genotype += gene

        individual = Individual()
        individual.genotype = new_genotype
        return individual
A Population
Now that we have an Individual, we must create a Population. The Population holds the candidate solutions, creates new offspring, and determines which individuals propagate into further generations.
μ And λ
Remember when we mentioned that μ and λ would be important in our Evolutionary Algorithm? This is where they come into play. μ and λ are defined as follows:
- μ: The population size.
- λ: The number of offspring to create.
Although these are simple constants, they can have a drastic impact on an Evolutionary Algorithm. For example, a population of size 1,000 might find a solution in far fewer generations than one of size 100, but each generation will take longer to process. It has been experimentally shown that a good ratio between the two is:
$$
μ / λ \approx 6
$$
However, this was tested over a large class of problems, and a particular Evolutionary Algorithm could benefit from a different ratio.
For our purposes, we will pick $μ = 100$ and $λ = 15$, a ratio just a little over 6.
class Population:
    def __init__(self, μ, λ):
        """Initialize a population of individuals.

        :μ (int): The population size.
        :λ (int): The offspring size.
        """
        self.μ, self.λ = μ, λ
        self.individuals = [Individual() for _ in range(self.μ)]
Generating Offspring
Generating offspring is trivial with the framework we imposed on an Individual: pick two random parents, perform a crossover between the two to create a child, mutate said child, and introduce the child back into the population pool.
In code, it would look as follows:
class Population:
    ...

    @staticmethod
    def random_parents(population):
        """Get two random parents from a population.

        :return (Individual, Individual): Two random parents.
        """
        split = choice(range(1, len(population.individuals)))
        return choice(population.individuals[:split]), choice(population.individuals[split:])

    @staticmethod
    def generate_offspring(population):
        """Generate offspring from a Population by picking two random parents, recombining them,
        mutating the child, and adding it to the offspring. The number of offspring is determined by
        λ.

        :population (Population): The population to generate the offspring from.
        :returns (Population): The offspring (of size λ).
        """
        offspring = Population(population.μ, population.λ)
        offspring.individuals = []

        for _ in range(population.λ):
            parent_one, parent_two = Population.random_parents(population)
            child = Individual.recombine(parent_one, parent_two)
            Individual.mutate(child, 0.15)
            offspring.individuals += [child]

        return offspring
Survivor Selection
The last core part of an Evolutionary Algorithm is survivor selection. This puts selective pressure on our candidate solutions, and it is what ultimately leads to fitter solutions.
Survivor selection picks the μ Individuals that should propagate into the next generation; however, it’s not as easy as picking the fittest μ Individuals. Always picking the μ best Individuals leads to premature convergence, a way of saying we “got a good solution, but not the best solution”: the Evolutionary Algorithm simply did not explore the search space enough to find other, fitter solutions.
There are a number of ways to run survivor selection, one of the most popular being $k$-tournament selection. $k$-tournament selection picks $k$ random Individuals from the pool and selects the fittest Individual of the tournament. It does this μ times to build the full, new Population. The higher $k$ is, the higher the selective pressure, but also the higher the chance of premature convergence. The lower $k$ is, the lower the chance of premature convergence, but the more the Evolutionary Algorithm resembles a random search.
At the bounds, $k = 1$ will always be just a random search, and $k = μ$ will always be choosing the best μ individuals.
We choose $k = 25$, giving less fit solutions a chance to win, but still focusing on the more fit solutions.
from copy import deepcopy
from random import sample


class Population:
    ...

    @staticmethod
    def survival_selection(population):
        """Determine from the population what individuals should not be killed. This is done via
        k-tournament selection: generate a tournament of k random individuals, pick the fittest
        individual, add it to the survivors, and remove it from the original population.

        Note:
            The population should be of size μ + λ. The resultant population will be of size μ.

        :population (Population): The population to run survival selection on. Must be of size μ + λ.
        :returns (Population): The resultant population after killing off unfit individuals. Will be
            of size μ.
        """
        new_population = Population(population.μ, population.λ)
        new_population.individuals = []
        individuals = deepcopy(population.individuals)

        for _ in range(population.μ):
            tournament = sample(individuals, min(25, len(individuals)))  # k = 25, capped by the remaining pool
            victor = max(tournament, key=lambda individual: individual.fitness)

            new_population.individuals += [victor]
            individuals.remove(victor)

        return new_population
The Evolutionary Algorithm
Our Evolutionary Algorithm closely mirrors the pseudocode from the introduction. Because the Individual and Population framework is already established, it is almost a copy-and-paste job.
class EA:
    def __init__(self, μ, λ):
        """Initialize an EA.

        :μ (int): The population size.
        :λ (int): The offspring size.
        """
        self.μ, self.λ = μ, λ

    def search(self):
        """Run the evolutionary algorithm until the fitness reaches 100%.

        :returns: The fittest individual.
        """
        generation = 1
        self.population = Population(self.μ, self.λ)

        while self.population.fittest.fitness < 100.0:
            offspring = Population.generate_offspring(self.population)
            self.population.individuals += offspring.individuals
            self.population = Population.survival_selection(self.population)
            generation += 1

        return self.population.fittest
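Note that search relies on a fittest helper on Population that is not shown above (it is in the full source linked at the end). A minimal sketch of what it could look like, assuming the property name used in EA.search:

class Population:
    ...

    @property
    def fittest(self):
        """Get the fittest individual in the population."""
        return max(self.individuals, key=lambda individual: individual.fitness)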
Running An Evolutionary Algorithm
Below, we have an instance of the evolutionary algorithm searching for a solution:
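A minimal driver would look something like the following (the secret string matches the example discussed below; the rest is my own sketch of how the pieces wire together, not the original demo):

Individual.message = SecretMessage("FreneticArray")  # the hidden string to recover

ea = EA(100, 15)        # μ = 100, λ = 15
winner = ea.search()
print(winner.genotype)  # "FreneticArray" once fitness hits 100%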
Now, looking at the string “FreneticArray”: it has 13 characters, and seeing as there are 26 letters in the alphabet (doubled to account for lowercase and uppercase), our search space was:
$$
\left(2 \times 26\right)^{13} \approx 2.0 \times 10^{22}
$$
Huge.
On average, our EA took 29 generations to finish[3]. As each generation had at most 115 individuals, we can conclude that, on average, we had to generate at most:
$$
29 \times 115 = 3335\, \text{solutions}
$$
Much smaller than $2.0 \times 10^{22}$. That is what Evolutionary Algorithms are good for: turning a large search space into a much smaller one.
Although there are many more advanced topics in Evolutionary Algorithms, this is enough to start implementing your own. With just the simple operators listed above, a genotype, and a search space that has a gradient, many problems can be solved with an Evolutionary Algorithm.
The Source Code
All source code can be found here.
One of the most valuable skills I have learned in the past year is undoubtedly LaTeX — so much so that I have moved almost all of my notetaking and document writing to LaTeX or Pandoc.
However, as with almost any markup language, I ran into friction: constantly importing headers and defining commands I used on an every-document basis. Some commands had become muscle memory; getting compiler errors every time I forgot to redefine them became quite frustrating. So I caved: I made a preamble that I prepend to all of my LaTeX docs.
\RequirePackage[l2tabu, orthodox]{nag}
\documentclass[12pt]{article}
\usepackage{amssymb,amsmath,verbatim,graphicx,microtype,units,booktabs}
\usepackage[margin=10pt, font=small, labelfont=bf, labelsep=endash]{caption}
\usepackage[colorlinks=true, pdfborder={0 0 0}]{hyperref}
\usepackage[utf8]{inputenc}
\usepackage{xcolor}
\newcommand{\shellcmd}[1]{\texttt{\colorbox{gray!30}{#1}}} % highlight shell or code commands
\newcommand{\todo}[2]{\textbf{\colorbox{red!50}{#1}}{\footnote{#2}}} % placeholder for something to do
\newcommand{\gray}{\rowcolor[gray]{.9}} % format a table row to make it more visible
\usepackage[left=1.15in, right=1.15in]{geometry}
\usepackage{titleps}
\newpagestyle{main}{
\setheadrule{.4pt}
\sethead{\sectiontitle}
{}
{Illya Starikov}
}
\pagestyle{main}
% *-------------------- Modify This As Needed --------------------* %
% \usepackage[top=0.5in, bottom=0.5in, left=0.5in, right=0.5in]{geometry}
% \pagestyle{empty}
\title{\todo{Title}{Insert title here.}}
\date{\today}
\author{Illya Starikov}
Then, I took it one step further. I occasionally make practice tests for students in Calculus and Physics, so I decided to make a set of commands that act as an extension of my preamble for tests.
\usepackage{lastpage,ifthen}
\newboolean{key}
\setboolean{key}{true} % prints questions and answers
% \setboolean{key}{false} % prints questions only
\newcommand{\question}[2]{\item (#1 Points) #2}
\newcommand{\answer}[1]{\ifthenelse{\boolean{key}}{#1}{}}
% This must be filled out
\newcommand{\points}{0} % total number of points for the exam
\newcommand{\capped}{0\ } % the capping score, to offer a point or two bonus
\newcommand{\difference}{0\ } % the difference of the two
\newcommand{\timeallotted}{0 hour} % duration of the exam
\begin{document}
\section*{\answer{Key}}
Complete the following problems.
In order to receive full credit,
please show all of your work and justify your answers.
You do not need to simplify your answers unless specifically instructed to do so.
You have \timeallotted.
This is a closed-book, closed-notes exam.
No calculators or other electronic aids will be permitted.
If you are caught cheating, you will receive a zero grade for this exam.
Please check that your copy of this exam contains \pageref{LastPage} numbered pages and is correctly stapled.
The sum of the max points for all the questions is \points,
but note that the max exam score will be capped at \capped
(i.e., there is \difference bonus point(s), but you can't score more than 100\%).
Best of luck.
\begin{enumerate}
\question{}{}
\answer{}
\end{enumerate}
\end{document}
Reminiscing on my academic career, there are certainly things I did right; however, there are certainly things I did wrong. Aside from the rights and the wrongs, there are things I wish I had done sooner. Today, I hope to shed some light on some of these things.
For context, as of a month ago I am an alumnus of Missouri University of Science and Technology, an engineering college in Missouri. It has equipped me very well in mathematics[1], physics[2], and computer science[3]. I wrote roughly 159,141 lines of code in college, so I feel as if I got my money’s worth. Thanks to Missouri S&T, I have a fairly diverse background in many things.
Below is advice I would recommend to anyone currently studying computer science (or programming in general).
Get A Computer You’ll Love
I got a 13-inch MacBook Pro my freshman year of college, and it was the best decision I made in my academic career. At the beginning of my senior year, I got a 15-inch MacBook Pro with an LG UltraFine 5K Display; it was the second-best decision I made in my academic career.
Getting a great computer is not just about speeding up compile times; it’s about making programming more enjoyable. During a typical programming day, I’ll be on my computer 8-12 hours, which is 33%-50% of that day. Reducing friction for 33%-50% of the day is almost invaluable.
Although a MacBook was perfect for me, it might not necessarily be right for anyone else. Find a computer you love; you won’t regret it.
Learn LaTeX
LaTeX was probably my most used programming tool in my academic career. It was a Swiss Army knife for any form of documents I might need:
- Notes
- Presentations
- Assignments
- Papers
- Documentation
Some samples of my LaTeX work can be found under my project page.
Learn A Different Language
At Missouri S&T, C++ is the first (and, during my time there, the only) language one learned. The problem with this? One language, even a do-it-all language like C++, is not always the right tool for the job. There were times when Python was a much better choice. Sometimes it was C. Sometimes it was Swift. It all depends on the project. My recommendation: if you learned Python, learn C++. If you learned C++, learn Python. From there, find languages that will be useful for your domain.
Learn Your Text Editor
I didn’t pick up Vim until I was a junior; this was a giant mistake on my part. Vim didn’t just make it easier to delete a particular line of code; it made it easier to open files quickly, explore codebases, and modify text faster.
Vim is not for everyone, and it might not be right for you. But find a good text editor and learn everything you can about it. It will greatly reduce the friction of actually writing code.
Take Higher-Level Classes Early
Nearly all Computer Science programs require you to take higher-level classes as electives; at our school, it was roughly five graduate-level classes. I made the mistake of taking all five my last year.
Emphasis on mistake.
Take them early; chances are you won’t yet be bogged down with job hunting, apartment hunting, or the like. Taking 18 credit hours, with three very difficult classes, in one term is not something I would wish on anyone.
Find Interests In Computer Science
When you find topics that interest you in Computer Science, chances are you will like your major more. I found a particular interest in AI-related courses, but that is not for everyone. Computer Science has a breadth of interesting topics; check to see what your school has to offer, like:
- Security
- AI
- Data Mining and Big Data
- Machine Learning
- Databases
- UI/UX Design
Find Interests Outside Computer Science
Burnout is a serious issue in the industry. It can be a serious issue inside school as well. Foster passions outside of the classroom. Maybe join a design team, fraternity, or a sports team. There will be plenty of time at work to learn new things.
In addition, take genuinely interesting electives (not the easy ones). I learned I had an interest in Physics, and I carry that with me every day.
Intern As Much As Possible
Internships are not only a gentle introduction to the industry, but also a great way to determine what job you will want after school. Internships are often joked about as “a 12-week interview process”, and this can be true. If you find a good company that you can do good work for, it’s a great way to secure a job upon graduation.
Have Fun
The most important lesson I learned in college was to have fun (in moderation). College is all about learning who you are, what you want out of life, how to interact with others, and how to simply have fun. Lessons you learn in college will get carried over after graduating. Why not use this time to make life-long friends, create great memories, and know who you really are?
In Closing
This guide is not comprehensive, but it’s something I wish I had read when I was starting. There’s always room to grow, so figure out what works for you. Do what makes you happy.
If there’s anything I am missing, feel free to ask.
Ever wondered how sorting algorithms looked in Swift? I did too — so I implemented some. Below you can find the following:
- Bubble Sort
- Selection Sort
- Insertion Sort
- Merge Sort
- Quicksort
As a preface: this code was written for Swift 2 (it uses var parameters and the ++/-- operators, which were removed in Swift 3), and some of the algorithms use a self-implemented swap function — it is as follows:
extension Array {
    mutating func swap(i: Int, _ j: Int) {
        (self[i], self[j]) = (self[j], self[i])
    }
}
Bubble Sort
func bubbleSort(var unsorted: Array<Int>) -> Array<Int> {
    for i in 0..<unsorted.count - 1 {
        for j in 0..<unsorted.count - i - 1 {
            if unsorted[j] > unsorted[j + 1] {
                unsorted.swap(j, j + 1)
            }
        }
    }

    return unsorted
}
Selection Sort
func selectionSort(var unsorted: Array<Int>) -> Array<Int> {
    for i in 0..<unsorted.count - 1 {
        var minimumIndex = i

        for j in i + 1 ..< unsorted.count {
            if unsorted[j] < unsorted[minimumIndex] {
                minimumIndex = j
            }
        }

        // Swap once per pass, after the minimum of the unsorted suffix has been found.
        unsorted.swap(minimumIndex, i)
    }

    return unsorted
}
Insertion Sort
func insertionSort(var unsorted: Array<Int>) -> Array<Int> {
    for j in 1..<unsorted.count {
        let key = unsorted[j]
        var i = j - 1

        while i >= 0 && unsorted[i] > key {
            unsorted[i + 1] = unsorted[i]
            i--
        }

        unsorted[i + 1] = key
    }

    return unsorted
}
Merge Sort
func merge(leftArray left: Array<Int>, rightArray right: Array<Int>) -> Array<Int> {
    var result = [Int]()
    var leftIndex = 0, rightIndex = 0

    // Calculated variable: true while neither index has reached the end of its respective array
    var bothIndexesAreWithinBounds: Bool { return leftIndex < left.count && rightIndex < right.count }

    while bothIndexesAreWithinBounds {
        if left[leftIndex] < right[rightIndex] {
            result.append(left[leftIndex])
            leftIndex++
        } else if left[leftIndex] > right[rightIndex] {
            result.append(right[rightIndex])
            rightIndex++
        } else {
            result.append(left[leftIndex])
            result.append(right[rightIndex])
            leftIndex++
            rightIndex++
        }
    }

    // Append whatever remains of either array
    result += Array(left[leftIndex..<left.count])
    result += Array(right[rightIndex..<right.count])

    return result
}
func mergesort(unsorted: Array<Int>) -> Array<Int> {
    let size = unsorted.count
    if size < 2 { return unsorted }

    let split = Int(size / 2)
    let left = Array(unsorted[0 ..< split])
    let right = Array(unsorted[split ..< size])

    let sorted = merge(leftArray: mergesort(left), rightArray: mergesort(right))
    return sorted
}
Quick Sort
func quicksort(var unsorted: Array<Int>) -> Array<Int> {
    quicksort(&unsorted, firstIndex: 0, lastIndex: unsorted.count - 1)
    return unsorted
}

private func quicksort(inout unsorted: Array<Int>, firstIndex: Int, lastIndex: Int) -> Array<Int> {
    if firstIndex < lastIndex {
        let split = partition(&unsorted, firstIndex: firstIndex, lastIndex: lastIndex)

        quicksort(&unsorted, firstIndex: firstIndex, lastIndex: split - 1)
        quicksort(&unsorted, firstIndex: split + 1, lastIndex: lastIndex)
    }

    return unsorted
}

private func partition(inout unsorted: Array<Int>, firstIndex: Int, lastIndex: Int) -> Int {
    let pivot = unsorted[lastIndex]
    var wall = firstIndex

    for i in firstIndex ... lastIndex - 1 {
        if unsorted[i] <= pivot {
            unsorted.swap(wall, i)
            wall++
        }
    }

    unsorted.swap(wall, lastIndex)
    return wall
}
Throughout my undergraduate studies, I have gotten quite good at digging deeper until I completely understand a subject; however, when I looked around me, I realized that I am in the minority. Oftentimes, students only want to know how to get from question to answer, without any intermediate thinking. When I try to hypothesize why, I cannot think of a definitive answer. However, I know one possible contributor: encapsulation[1].
The most famous example I came across is the derivative of any trigonometric function besides $\sin x$ or $\cos x$. As any student who’s taken Calculus can tell you, $$\frac{d}{dx} \left(\tan x\right) = \sec^2 x $$
However, most students couldn't tell you:
$$\begin{align*}
\frac{d}{dx} \left(\tan x\right) &= \frac{d}{dx} \left(\frac{\sin x}{\cos x}\right) \\
&= \frac{\cos x \cos x - \sin x \left(-\sin x\right)}{\cos^2 x} \\
&= \frac{\cos^2 x + \sin^2 x}{\cos^2 x} \\
&= \frac{1}{\cos^2 x} \\
&= \sec^2 x
\end{align*}$$
At first glance, it's hardly useful information. As a matter of fact, you might think of it as a piece of trivia — until you see how many people miss it on a quiz or test. And entry-level Calculus is not the only place this is relevant; the same happens in physics, thermodynamics, mathematics of any kind, computer science, and the like.
A question you might be asking: why does it matter? If it's only worth a couple of points on a test, is it that harmful? Maybe not, until you realize you do not understand the subject conceptually — only mechanically.
The next time you're learning something, ask yourself: "why?". The outcome might delight you.
- The act of enclosing in a capsule; the growth of a membrane around (any part) so as to enclose it in a capsule. ↩︎
It’s not so often I come across a problem that works out so beautifully yet touches so many different aspects of competitive programming. One particular problem, courtesy of Kattis, really did impress me.
The problem was House of Cards, and the problem statement was this (skip to the TL;DR at the bottom):
Brian and Susan are old friends, and they always dare each other to do reckless things. Recently Brian had the audacity to take the bottom right exit out of their annual maze race, instead of the usual top left one. In order to trump this, Susan needs to think big. She will build a house of cards so big that, should it topple over, the entire country would be buried by cards. It’s going to be huge!
The house will have a triangular shape. The illustration to the right shows a house of height 6 and Figure 1 shows a schematic figure of a house of height 5.
[Figure 1: a schematic house of cards of height 5]
For aesthetic reasons, the cards used to build the tower should feature each of the four suits (clubs, diamonds, hearts, spades) equally often. Depending on the height of the tower, this may or may not be possible. Given a lower bound $h_0$ on the height of the tower, what is the smallest possible height $h \geq h_0$ such that it is possible to build the tower?
TL;DR: Using Figure 1 as a reference, you are given a lower bound on the height of a tower of cards. However, there must be an equal distribution of all four suits: clubs, diamonds, hearts, and spades.
This implies that the number of cards has to be divisible by $4$. Seeing as the input was huge ($1 \leq h_0 \leq 10^{1000}$), there was no brute-forcing this. So, first thought: turn this into a closed-form series and solve it.
Computing the number of cards for the first five heights, I got the following sequence:
$$\{2, 7, 15, 26, 40, \ldots\}$$
I was able to turn this sequence into a series quite easily:
$$\sum _{n = 1} ^{h_0} \left(3n - 1\right)$$
This sums to the following closed form:
$$\frac{1}{2} h_0 \left(3 h_0 + 1\right)$$
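As a quick sanity check against the sequence above (my own check, not part of the original write-up), plugging in $h_0 = 5$ gives
$$\frac{1}{2} \cdot 5 \cdot \left(3 \cdot 5 + 1\right) = \frac{5 \cdot 16}{2} = 40,$$
which matches the fifth term.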
So, all I had to do was plug $h_0$ into the equation and increment the height while the count was not divisible by $4$. Then I realized how large the input really was. The input size ($10^{1000}$) is orders of magnitude larger than typical large integer types allow ($\approx 1.84 \times 10^{19}$).
I realized this couldn’t be tested against an intensive data set, because there is only one number to calculate per case. I also figured that I should only have to increment the height roughly four times at most before hitting a count divisible by $4$. Keeping this in mind, I decided to use Python: it can work with arbitrarily large integers, making it ideal in this situation.
I sat down, hoped for the best, and wrote the following code.
def getNumberOfCards(x):
    return (3*pow(x, 2) + x) // 2


height = int(input())

while getNumberOfCards(height) % 4 != 0:
    height += 1

print(height)
It worked.
I avoid the Mac App Store as much as possible. It’s buggy. It’s slow. Apps are getting pulled from it left and right. But I could not avoid it permanently because of software updates. As the latest macOS Sierra update rolled out, I clicked on the App Store, followed by Updates, and...
I could not get the Updates page to open. Killed the App Store. Restarted the computer. Everything.
After doing some research, I discovered there is a way to update software from the terminal: softwareupdate. So, after running one command (sudo softwareupdate -iv), I am writing this on the latest version of macOS Sierra.