Illya Starikov
GDL-60
Streamline your flight prep, and enjoy your aircraft more with the GDL 60 datalink and PlaneSync™ technology.

The Garmin GDL-60 is a small panel-mounted datalink module that brings wireless connectivity to an aircraft’s avionics suite. It is part of Garmin’s PlaneSync “connected cockpit” system and works with Garmin avionics (such as GTN™ Xi navigators and G1000®/G1000 NXi flight decks) to connect the aircraft to the outside world. In practice the GDL-60 acts like an onboard “internet box,” using both a built-in 4G LTE cellular modem and Wi‑Fi radio to link the plane’s systems with Garmin’s cloud services and pilot tablets. This lets the avionics exchange data automatically – for example, it can download chart and database updates or stream weather and traffic directly into mobile apps – without a pilot having to plug in data cards.
Technically, the GDL-60 is a compact device (about 8.72″ x 4.00″ x 1.12″ and 1.5 lb) that mounts in the instrument panel. It runs on 14/28 V DC aircraft power and is certified for high altitudes up to 55,000 ft. Its hardware includes dual Wi-Fi radios (one access point, one client) using 802.11 b/g/n at 2.4 GHz, plus a multi-band LTE modem covering global 4G frequencies. In effect, the GDL-60 creates both an onboard Wi‑Fi network and a cellular data link. The aircraft can thus stay connected on the ground or in flight, subject to cell coverage. Garmin’s documentation notes that the unit “enables PlaneSync technology” via these connections and that its LTE/Wi-Fi link automatically keeps the avionics databases updated.
Once installed, the GDL-60 offers several key features for pilots. One major advantage is automated database updates: Garmin explains that the GDL-60 “offers database downloads directly to the aircraft,” so that new charts and navigation data can download over the air without a pilot present. The system then “auto-synchronize[s] across your avionics at power up,” eliminating manual update chores. The GDL-60 also logs flight and engine data; after landing it can “transmit flight and engine log data automatically… to secure cloud storage,” where pilots or maintenance crews can review it via the Garmin Pilot mobile app or the flyGarmin web portal. Another useful feature is remote aircraft monitoring: with the GDL-60 and Garmin Pilot, an owner on the ground can check the airplane’s status from anywhere. The Pilot app can show things like Hobbs and tach times, fuel quantity, battery voltage, outside-air temperature, oil temperature, and even the last known GPS location of the aircraft. In short, GDL-60 keeps pilots informed about their aircraft’s health and readiness.
Perhaps the most visible benefit is mobile connectivity. The GDL-60 enables Garmin’s Connext system, which streams in-flight data from the panel to pilot devices. For example, it can send ADS-B weather (FIS-B) and traffic, GPS position, and even attitude (AHRS) information to apps like Garmin Pilot or ForeFlight. In fact, Garmin describes that the GDL-60 lets you “stream weather, traffic, AHRS, flight plans and other data from your avionics to flight apps such as Garmin Pilot and ForeFlight”. Pilots can also transfer flight plans between their tablet and the panel through this link. (With additional hardware, the GDL-60 can even extend to cabin entertainment and SATCOM: for example, pairing it with a SiriusXM receiver enables XM audio tuning via the Pilot app, and it can interface with a Garmin GSR-56 for in-flight texting/voice calls.) Overall, the Garmin GDL-60 turns a compatible cockpit into a connected cockpit – keeping nav databases current and putting real-time flight and weather data into pilots’ hands through familiar mobile interfaces.


Pixel Tablet
Meet the new Google Pixel Tablet that’s helpful in your hands and at home.

The Google Pixel Tablet is a home-focused Android tablet released in 2023. It features a 10.95-inch LCD touchscreen display (2560 × 1600 resolution, 16:10 aspect ratio) with around 500 nits of brightness. The tablet’s build uses an aluminum frame with rounded edges and a nano-ceramic coating inspired by porcelain, which gives it a soft matte texture for better grip. Google offers the Pixel Tablet in three colors – Porcelain (off-white), Hazel (gray-green), and Rose (pink). The device also houses four built-in speakers (two on each side) for stereo audio output during media playback.
Under the hood, the Pixel Tablet is powered by Google’s Tensor G2 chipset (the same processor used in the Pixel 7 series smartphones). It comes with 8 GB of RAM and either 128 GB or 256 GB of internal UFS 3.1 storage (non-expandable). The tablet ships with Android 13 and includes Google’s Titan M2 security chip, with Google promising at least five years of security updates for the device. Power is supplied by a built-in 27 Wh battery (about 7020 mAh) that is rated for up to 12 hours of video streaming on a full charge. Connectivity is Wi‑Fi 6 and Bluetooth 5.2 only (the Pixel Tablet does not offer cellular/LTE support). For cameras, it has an 8 MP front camera and an 8 MP rear camera, both with fixed focus and primarily intended for video calls or simple photos. A fingerprint scanner is integrated into the power button for secure unlocking.
Google has optimized the Pixel Tablet’s software for a large screen experience. Many popular apps (such as YouTube, Spotify, and Disney+) have improved layouts on the tablet’s display, and a Google TV app comes preloaded to make it easy to browse and stream video content on the big screen. The tablet supports multitasking features like split-screen mode, allowing two apps to run side by side, and it offers accurate voice typing and other Google AI features for convenient hands-free input. It is also compatible with USI 2.0 stylus pens, enabling users to draw or take notes with a supported active stylus (no stylus is included in the box). Notably, the Pixel Tablet supports multiple user profiles – each family member can have their own account with separate apps, settings, and personalizations on the device. This multi-user support underlines its role as a shared home tablet.
One of the Pixel Tablet’s signature features is its Charging Speaker Dock, which comes included with the tablet. The tablet can be placed on this dock via pogo pin connectors (held in place magnetically), which both charges the device and boosts its audio. The dock contains a full-range speaker that provides significantly more bass and room-filling sound compared to the tablet’s built-in speakers. When the Pixel Tablet is docked, it automatically enters Hub Mode, functioning like a smart display for the home. In this mode, the tablet can show a hands-free Google Assistant interface, smart home controls, and even a digital photo frame or clock on the screen. Users can control smart devices (lights, thermostats, doorbells, etc.) directly from the tablet’s hub interface, similar to a Google Nest Hub. Additionally, the Pixel Tablet is the first tablet with Chromecast built-in, allowing you to cast music or videos from your phone to the tablet’s screen when it’s on the dock. This dual functionality lets the Pixel Tablet serve as both a personal tablet and a communal smart home display depending on how it’s being used.







Pixel Buds Pro (I & II)
Loud and clear, Pixel Buds Pro are here.

Google’s Pixel Buds Pro (released mid-2022) are compact true wireless earbuds with a smooth, stemless design. Unlike earlier Pixel Buds models, the Pro version drops the “stabilizer arc” wing-tips for a simpler in-ear fit, improving comfort. Each earbud comes with silicone tips in multiple sizes to ensure a snug seal. The oval charging case (available in four colors) has a pebble-like shape similar to the Pixel Buds A-Series case (about 10–15% larger). Both the earbuds and case are water-resistant (buds IPX4, case IPX2), making them suitable for workouts and light rain.
The Pixel Buds Pro include:
- Active Noise Cancellation (ANC) and a transparency (ambient sound) mode, letting you either block out or hear surrounding noise as needed.
- Hands-free Google Assistant for voice commands (play/pause, volume, etc.) and even on-device translation support in conversation mode.
- Multipoint Bluetooth pairing (connect to two devices at once) and Google Fast Pair for quick setup between phones, tablets, and laptops.
- Water/sweat resistance (earbuds rated IPX4) for use during exercise or in light rain.
The Pixel Buds Pro deliver strong audio and reliable performance. Reviewers note clear, well-balanced sound and effective noise canceling. They also feature an in-ear detection sensor that automatically pauses audio when you remove an earbud. Battery life is rated up to about 7 hours per charge with ANC on (11 hours with it off), and the compact case provides roughly two full extra charges (around 30 total hours). The case supports both USB-C wired charging and Qi wireless charging. Connectivity is much improved over earlier Pixel Buds: the earbuds use Bluetooth 5.0 with Google’s Fast Pair and show virtually no dropouts in testing. They also support automatic device switching (multipoint pairing) between two connected devices.














Nest Indoor Camera
Nest Cam tells you more about what’s going on inside.

Google’s Nest Indoor Camera is a small home security camera designed for indoor use. It has a compact, rounded body and comes in neutral colors so it can blend into home decor. The camera can sit on a shelf or table using its built-in stand, or be mounted on a wall with the included bracket and screws. It connects to your home Wi-Fi network and streams live video to your smartphone or tablet through the Google Home app.
The camera records full high-definition (1080p) video and uses a wide-angle lens (about 135 degrees) to capture a broad area. It also uses high-dynamic-range (HDR) imaging to improve detail in bright and dark parts of the scene, and it has infrared night vision to see clearly even in low light. A built-in microphone and speaker allow two-way communication through the app, so you can listen to and speak with people or pets near the camera. Together, these video and audio features let you keep an eye on a room and even interact with anyone there while you’re away.
One of the camera’s key smart features is motion detection. If the camera detects movement, it automatically sends an alert to your phone. You can then open the live video feed in the Google Home app at any time to check on what’s happening. The camera also has a small LED light (usually green) that turns on when it is recording or streaming. This visual indicator lets people nearby know the camera is active and helps address privacy concerns.
Setup is straightforward and user-friendly. You plug the camera into a standard wall outlet using the supplied USB cable, then open the Google Home app on your phone to complete the setup. The app guides you through connecting the camera to your Wi-Fi network and naming the device, so most people can have it running in just a few minutes. This simple process means advanced technical skills are not needed to start using the camera.
As a product, the Nest Indoor Camera adds to Google’s Nest line of smart home security devices. It expands Google’s offerings for indoor monitoring and reinforces the company’s presence in the home security market. Overall reception has been positive, with many users praising its reliable performance, easy setup, and clean, modern design. Many also appreciate its smooth integration with Google’s smart home ecosystem, making it a convenient choice for monitoring a home.

Update: Previously, a system was introduced for detecting if an individual was stuck at a local optimum. After extensive testing, this system was shown to be fragile. This post has been updated to showcase a more robust system.
Previously, our Evolutionary Algorithms had it pretty easy: there would be either one local optimum (like our Secret Message problem instance) or multiple valid local optima (like the 3-SAT problem instance). In the real world, we might not be so lucky.
Often, an Evolutionary Algorithm will encounter a local optimum within the search space that is not so easy to escape — offspring are generated in close proximity to the optimum, and mutation alone is not enough to start exploring other parts of the search space.
To add to the frustration, there might not be enough time or patience to wait for the Evolutionary Algorithm to finish. We might also have criteria we are looking for beyond just a fitness target.
We are going to tackle both of these issues.
Applying Termination Conditions
First, we will examine what criteria we want met before our Evolutionary Algorithm terminates. In general, there are six that are universal:
- Date and Time. After a specified date and time, terminate.
- Fitness Target. This is what we had before; terminate when any individual attains a certain fitness.
- Number of Fitness Evaluations. Every generation, each individual's fitness is evaluated (in our case, $\mu + \lambda$ fitnesses are evaluated per generation). Terminate after a specified number of fitness evaluations.
- Number of Generations. Just like the number of fitness evaluations, terminate after a specified number of generations.
- No Change In Average Fitness. This is a bit tricky. After a specified $N$ generations, we check every $N$ generations back to determine if the average fitness of the population has improved. We have to be careful in our programming; by preserving diversity, we almost always lose some average fitness.
- No Change In Best Fitness. Just like No Change In Average Fitness, but instead of taking the average fitness, we take the best.
Later, we will see how Conditions 5 & 6 come in handy for determining if we are stuck in a local optimum.
To make sure we are always given valid termination conditions, we will have a superclass that all termination conditions inherit from. From there, we will have a separate subclass for each of the conditions listed above.
class _TerminationCondition:
    """Base class for all termination conditions. Stores the condition's
    target value (e.g. a fitness, a date, or a number of generations)."""

    def __init__(self, value=None):
        self.value = value

class FitnessTarget(_TerminationCondition):
    """Terminate after an individual reaches a particular fitness."""

class DateTarget(_TerminationCondition):
    """Terminate after a particular date and time."""

class NoChangeInAverageFitness(_TerminationCondition):
    """Terminate after there has been no change in the average fitness for a period of time."""

class NoChangeInBestFitness(_TerminationCondition):
    """Terminate after there has been no change in the best fitness for a period of time."""

class NumberOfFitnessEvaluations(_TerminationCondition):
    """Terminate after a particular number of fitness evaluations."""

class NumberOfGenerations(_TerminationCondition):
    """Terminate after a particular number of generations."""
Now, we need something that will keep track of all these conditions and tell us when we should terminate. And here's where we need to be careful.
First, we need to know when to terminate. We want to mix and match different conditions depending on the use case. This begs the question:
Should the Evolutionary Algorithm terminate when one condition has been met, or all of them?
Generally, it makes more sense to terminate when any of the conditions has been met, as opposed to all of them. Suppose the two termination conditions are a date and a target fitness. It does not make sense to keep going after the target fitness is reached, and (if in a time crunch) it does not make sense to keep going after the specified date.
Second, how should we define no change in average/best fitness? These values can oscillate quite a bit, so we want to be conservative in our definition. One plausible solution is to take the average of the first quartile (the first 25% to enter the queue) and see if there is a single individual with a better fitness in the second, third, or fourth quartile (the last 75% to enter the queue). This way, even if there were very dominant individuals in the beginning, a single, more dominant individual will keep the Evolutionary Algorithm going.
From this, we have everything we might need to keep track of our terminating conditions.
class TerminationManager:
    def __init__(self, termination_conditions, fitness_selector):
        assert isinstance(termination_conditions, list)
        assert all(issubclass(type(condition), _TerminationCondition) for condition in termination_conditions), "Termination condition is not valid"

        self.termination_conditions = termination_conditions
        self.__fitness_selector = fitness_selector

        self.__best_fitnesses = []
        self.__average_fitnesses = []
        self.__number_of_fitness_evaluations = 0
        self.__number_of_generations = 0

    def should_terminate(self):
        for condition in self.termination_conditions:
            if isinstance(condition, FitnessTarget) and self.__fitness_should_terminate():
                return True
            elif isinstance(condition, DateTarget) and self.__date_should_terminate():
                return True
            elif isinstance(condition, NoChangeInAverageFitness) and self.__average_fitness_should_terminate():
                return True
            elif isinstance(condition, NoChangeInBestFitness) and self.__best_fitness_should_terminate():
                return True
            elif isinstance(condition, NumberOfFitnessEvaluations) and self.__fitness_evaluations_should_terminate():
                return True
            elif isinstance(condition, NumberOfGenerations) and self.__generations_should_terminate():
                return True

        return False

    def reset(self):
        """Reset the best fitnesses, average fitnesses, number of generations, and number of fitness evaluations."""
        self.__best_fitnesses = []
        self.__average_fitnesses = []
        self.__number_of_fitness_evaluations = 0
        self.__number_of_generations = 0

    def __fitness_should_terminate(self):
        """Determine if we should terminate based on the max fitness."""

    def __date_should_terminate(self):
        """Determine if we should terminate based on the date."""

    def __average_fitness_should_terminate(self):
        """Determine if we should terminate based on the average fitness for the last N generations."""

    def __best_fitness_should_terminate(self):
        """Determine if we should terminate based on the best fitness for the last N generations."""

    def __fitness_evaluations_should_terminate(self):
        """Determine if we should terminate based on the number of fitness evaluations."""

    def __generations_should_terminate(self):
        """Determine if we should terminate based on the number of generations."""
And the changes to our Evolutionary Algorithm are minimal, too.
class EA:
    ...
    def search(self, termination_conditions):
        generation = 1
        self.population = Population(self.μ, self.λ)

        fitness_getter = lambda: [individual.fitness for individual in self.population.individuals]  # noqa
        termination_manager = TerminationManager(termination_conditions, fitness_getter)

        while not termination_manager.should_terminate():
            offspring = Population.generate_offspring(self.population)
            self.population.individuals += offspring.individuals
            self.population = Population.survival_selection(self.population)

            print("Generation #{}: {}".format(generation, self.population.fittest.fitness))
            generation += 1

        print("Result: {}".format(self.population.fittest.genotype))
        return self.population.fittest
However, we can still do better.
Generations Into Epochs
Before, the Evolutionary Algorithm framework we put in place was strictly a generational model. One generation led to the next, and there were no discontinuities. Now, let's turn our generational model into an epochal one.
We define an epoch as the period until our Evolutionary Algorithm encounters a local optimum. Once the end of an epoch is reached, the EA is reset, and the previous epoch's population is saved. Upon approaching the end of the next epoch, we reintroduce the saved population; this way, more of the search space is covered.
How can we determine if we are at a local optimum?
We can't.
That does not mean we cannot have a heuristic for it. When there is little to no change in average/best fitness for a prolonged period of time, that typically means a local optimum has been reached. How long is a prolonged period of time? That's undetermined; it is another parameter we have to account for.
Note that if the Evolutionary Algorithm keeps producing fitter individuals but the average fitness remains the same, the algorithm would terminate. Likewise, if the best fitness remains the same but the average fitness closely approaches the best, the EA would terminate. Therefore, we should check that both the best fitness and the average fitness have not changed; only then should we start a new epoch.
Luckily, we already have something that will manage the average/best fitness for us.
class EA:
    ...
    def search(self, termination_conditions):
        epochs, generation, total_generations = 1, 1, 1
        self.population = Population(self.μ, self.λ)
        previous_epoch = []

        fitness_getter = lambda: [individual.fitness for individual in self.population.individuals]  # noqa
        termination_manager = TerminationManager(termination_conditions, fitness_getter)
        epoch_manager_best_fitness = TerminationManager([NoChangeInBestFitness(250)], fitness_getter)
        epoch_manager_average_fitness = TerminationManager([NoChangeInAverageFitness(250)], fitness_getter)

        while not termination_manager.should_terminate():
            if epoch_manager_best_fitness.should_terminate() and epoch_manager_average_fitness.should_terminate():
                if len(previous_epoch) > 0:
                    epoch_manager_best_fitness.reset()
                    epoch_manager_average_fitness.reset()

                    self.population.individuals += previous_epoch
                    previous_epoch = []
                else:
                    epoch_manager_best_fitness.reset()
                    epoch_manager_average_fitness.reset()

                    previous_epoch = self.population.individuals
                    self.population = Population(self.μ, self.λ)

                    generation = 0
                    epochs += 1

            self.population = Population.survival_selection(self.population)
            offspring = Population.generate_offspring(self.population)
            self.population.individuals += offspring.individuals

            self.__log(total_generations, epochs, generation)
            total_generations += 1
            generation += 1

        print("Result: {}".format(self.population.fittest.genotype))
        return self.population.fittest

    def __log(self, total_generations, epochs, generation):
        """Log the progress of the Evolutionary Algorithm."""
        ...
Although considerably more complicated, this new Evolutionary Algorithm framework allows us to explore much more of a search space (without getting stuck).
Let's put it to the test.
A New 3-SAT Problem
We're going to take on a substantially harder 3-SAT instance: 1,000 clauses, 250 variables. To make matters worse, the number of valid solutions is also lower. We will include the following termination conditions:
- A time limit of eight hours.
- A fitness target of all clauses satisfied (100%).
- One million generations.
So, how does our Evolutionary Algorithm fare?
Not well. After twenty epochs and thousands of generations, we do not find a solution. Fear not; in subsequent posts, we will work on optimizing our Genetic Algorithm to handle much larger cases more effectively.
We have been introduced to recombination operators before; however, that was merely an introduction. There are dozens of different Evolutionary Algorithm recombination operators for any established genotype; some are simple, some are complicated.
For a genotype representation that is a permutation (such as a vector[1], bit-string, or hash-map[2]), we have seen a possible recombination operator. Our 3-SAT solver uses a very popular recombination technique: uniform crossover.
Furthermore, we know a permutation is not the only valid genotype for an individual: other possibilities include an integer or a real-valued number.
Note, for simplicity, we will discuss recombination that forms one offspring. The exact same process can be applied to form a second child (generally with the parents' roles reversed). Recombination can also be applied to more than two parents (depending on the operator); again, for simplicity, we choose to omit it[3].
First, let us start with permutations.
Permutation Crossover
In regard to permutation crossover, there are three common operators:
- Uniform Crossover
- $N$ -Point Crossover
- Davis Crossover
Uniform crossover we have seen before. We consider individual elements of the permutation and choose each one from either parent with equal probability. For large enough genotypes, the offspring genotype should consist of roughly 50% of the genotype from parent one and 50% from parent two.

$N$-point crossover considers segments of a genotype, as opposed to individual elements. This operator splits the genotypes of Parent 1 and Parent 2 at $N$ points (hence the name $N$-point) and creates a genotype by alternating segments from the two parents. For every $N$, there will be $N + 1$ segments. For 1-point crossover, the genotype is split into two segments, and the offspring genotype is composed of one segment from Parent 1 and one segment from Parent 2. For 2-point crossover, there will be three segments, and the offspring genotype will have two parts from Parent 1 and one part from Parent 2 (or two parts from Parent 2 and one from Parent 1, depending on which parent the first segment comes from).
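As a concrete illustration, here is a toy sketch of $N$-point crossover on plain Python lists; this standalone helper is for illustration only, and the SAT-specific implementation appears later in the post.

from random import sample

def n_point_crossover(parent_one, parent_two, n):
    """Alternate segments of the two parents at n random cut points."""
    cuts = sorted(sample(range(1, len(parent_one)), n)) + [len(parent_one)]
    child, start, take_from_one = [], 0, True
    for cut in cuts:
        child += (parent_one if take_from_one else parent_two)[start:cut]
        start, take_from_one = cut, not take_from_one
    return child

print(n_point_crossover(list("AAAAAAAA"), list("bbbbbbbb"), 1))
# e.g. ['A', 'A', 'A', 'b', 'b', 'b', 'b', 'b']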

Davis Crossover tries to preserve the ordering of the genotype in the offspring (as opposed to the previous methods, where ordering was not considered). The premise is a bit complicated, but bear with me. Pick two random indices ($k_1$ and $k_2$), and copy the genetic material of Parent 1 from $k_1$ to $k_2$ into the offspring at $k_1$ to $k_2$. Put Parent 1 to the side; its role is finished. Then start copying the genotype of Parent 2, from $k_1$ to $k_2$, into the beginning of the offspring. When $k_2$ is reached in the parent, start copying from the beginning of Parent 2, and when $k_1$ is reached in the parent, skip to $k_2$. When $k_1$ is reached in the offspring, skip to $k_2$ and keep copying until the end. If this seems complicated (it very much is), reference the accompanying figure.

Those are considered the three most popular choices for permutations. Now, let us look at integer crossover.
Integer Crossover
Integer crossover is actually quite an interesting case; integers can be recombined as permutations or real-valued numbers.
An integer is already a permutation, just not at first glance: binary. The individual bits of an integer's binary representation are analogous to elements in a vector, and the whole collection is a vector. Now it is a valid permutation, and we can apply uniform crossover, $N$-point crossover, or Davis Crossover, just as we have seen.
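For instance, uniform crossover on two integers can be done with a random bit mask; the bit-mask trick here is my own illustration, not from the post.

from random import getrandbits

def uniform_bit_crossover(x, y, bits=8):
    """Uniform crossover on binary representations: each bit of the
    child comes from x or y with equal probability."""
    mask = getrandbits(bits)  # 1-bits take from x, 0-bits take from y
    return (x & mask) | (y & ~mask & ((1 << bits) - 1))

print(uniform_bit_crossover(0b11110000, 0b00001111))  # some mix of the parents' bits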
An integer is also already a real-valued number, so we can treat it as such. Let's take a look at how to recombine it.
Real-Valued Crossover
Real-valued crossover is different from the methods we have seen before. We could turn the number into binary, but that would be a nightmare to deal with. Instead, we can exploit the arithmetic properties of real-valued numbers — with a weighted arithmetic mean. For a child (of real value) $z$, we can generate it from Parent 1 $x$ and Parent 2 $y$ as such:
$$
z = \alpha \cdot x + (1 - \alpha) \cdot y
$$
Now, if we want to cross over entire real-valued vectors from Parent 1 and Parent 2, we can apply this to every element.
$$
z_i = \alpha \cdot x_i + (1 - \alpha) \cdot y_i
$$
This can perform better than the crossover methods discussed above, but it depends entirely on the use case.
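Here is a minimal sketch of this element-wise arithmetic recombination; the function name and the default of drawing $\alpha$ at random are my own choices.

from random import random

def arithmetic_crossover(x, y, alpha=None):
    """Whole arithmetic recombination: z_i = alpha * x_i + (1 - alpha) * y_i."""
    if alpha is None:
        alpha = random()  # draw a weight in [0, 1) if none is supplied
    return [alpha * xi + (1 - alpha) * yi for xi, yi in zip(x, y)]

print(arithmetic_crossover([1.0, 2.0], [3.0, 4.0], alpha=0.5))  # [2.0, 3.0]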
Implementing Permutation Recombination
As always, we will now tackle implementing the permutation crossovers discussed above. None of them are incredibly complicated, except possibly $N$-point crossover.
from random import choice, sample

class Individual:
    ...
    @staticmethod
    def __uniform_crossover(parent_one, parent_two):
        new_genotype = SAT(Individual.cnf_filename)

        for variable in parent_one.genotype.variables:
            gene = choice([parent_one.genotype[variable], parent_two.genotype[variable]])
            new_genotype[variable] = gene

        individual = Individual()
        individual.genotype = new_genotype
        return individual

    @staticmethod
    def __n_point_crossover(parent_one, parent_two, n):
        new_genotype = SAT(Individual.cnf_filename)
        variables = sorted(parent_one.genotype.variables)
        splits = [(i * len(variables) // (n + 1)) for i in range(1, n + 2)]

        i = 0
        for index, split in enumerate(splits):
            # Alternate which parent supplies each segment.
            for variable_index in range(i, split):
                gene = parent_one.genotype[variables[i]] if index % 2 == 0 else parent_two.genotype[variables[i]]
                new_genotype[variables[i]] = gene
                i += 1

        individual = Individual()
        individual.genotype = new_genotype
        return individual

    @staticmethod
    def __davis_crossover(parent_one, parent_two):
        new_genotype = SAT(Individual.cnf_filename)
        variables = sorted(parent_one.genotype.variables)
        split_one, split_two = sorted(sample(range(len(variables)), 2))

        for variable in variables[:split_one]:
            new_genotype[variable] = parent_two.genotype[variable]
        for variable in variables[split_one:split_two]:
            new_genotype[variable] = parent_one.genotype[variable]
        for variable in variables[split_two:]:
            new_genotype[variable] = parent_two.genotype[variable]

        individual = Individual()
        individual.genotype = new_genotype
        return individual
Recombination In General
By no means is recombination easy. It took evolution hundreds of thousands of years to formulate ours. The particular crossover operator to use is entirely dependent on the context of the problem; and most of the time, it is not obvious by any stretch. Sometimes, there might not even be an established crossover operator for a particular genotype.
Sometimes, you might have to get a little creative.
Steepest-Ascent Hill-Climbing
Search algorithms have a tendency to be complicated. Genetic algorithms have a lot of theory behind them. Adversarial algorithms[1] have to account for two conflicting agents. Informed search relies heavily on heuristics. Well, there is one algorithm that is quite easy to grasp right off the bat.
Imagine you are at the bottom of a hill with no idea where to go. A decent place to start would be to walk up the hill to survey the landscape. Then, restart somewhere else and climb again, until you find the highest peak, right? Well, that is the entire algorithm.
Let's dig a bit deeper.
An Introduction
What is Steepest-Ascent Hill-Climbing, formally? It's nothing more than an agent searching a search space, trying to find a local optimum. It does so by starting at a random node and trying to go uphill at all times.
The pseudocode is rather simple:
current ← Generate-Initial-Node()
while true:
    neighbors ← Generate-All-Neighbors(current)
    successor ← Highest-Valued-Node(neighbors)
    if Value-At-Node(successor) <= Value-At-Node(current):
        return current
    current ← successor
What is this Value-At-Node (the $f$-value) mentioned above? It's nothing more than a heuristic value that is used as some measure of quality for a given node. Some examples of these are:
- Function Maximization: Use the value at the function $f(x, y, \ldots, z)$.
- Function Minimization: Same as before, but the reciprocal: $1 / f(x, y, \ldots, z)$.
- Path-Finding: Use the reciprocal of the Manhattan distance.
- Puzzle-Solving: Use some heuristic to determine how well/close the puzzle is solved.
The best part? If the problem instance can have a heuristic value associated with it and can generate points within the search space, it is a candidate for Steepest-Ascent Hill-Climbing.
Implementing Steepest-Ascent Hill-Climbing
To demonstrate, we are going to solve an intuitive problem: function maximization. Given a function $z = f(x, y)$, for what values of $x, y$ will $z$ be the largest? To start, we are going to use a trivial function to maximize:
$$
z = -x^2 - y^2
$$
We see it is nothing more than a paraboloid. Furthermore, since the domain is infinite, we are going to restrict it to $\{\, x, y \in \mathbb{Z} : -100 \leq x, y \leq 100 \,\}$; therefore, we only have integer values in $[-100, 100]$.

So, let's begin.
The Representation
Because we will be searching throughout a search space, we will need some representation of a state. For our particular problem instance, it's very easy: a point $(x, y)$. We will also need to evaluate the $f$-value, so we create an auxiliary class as well.
class Node:
    """A node in a search space (similar to a point (x, y))."""

    def __init__(self, x, y):
        self.x = x
        self.y = y

class Function:
    """A function and its respective bounds."""

    def __init__(self, function, x_bounds, y_bounds):
        ...

    def __call__(self, node):
        ...

    @property
    def x_bounds(self):
        """Get the x bounds of the function.

        Returns:
            tuple<int, int>: The x bounds of the function in the format (min, max).
        """
        ...

    @property
    def y_bounds(self):
        """Get the y bounds of the function.

        Returns:
            tuple<int, int>: The y bounds of the function in the format (min, max).
        """
        ...
That will be all that we need for our purposes.
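For instance, the paraboloid above might be wrapped like so; this is a hypothetical usage sketch, assuming the elided Function body simply stores the callable and bounds it is given.

# Hypothetical: assumes Function stores the callable and the bounds.
paraboloid = Function(lambda node: -node.x ** 2 - node.y ** 2,
                      x_bounds=(-100, 100), y_bounds=(-100, 100))
print(paraboloid(Node(0, 0)))  # 0, the global maximum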
Steepest-Ascent Hill-Climbing
As we saw before, there are only four moving pieces that our hill-climbing algorithm has: a way of determining the value at a node, an initial node generator, a neighbor generator, and a way of determining the highest valued neighbor.
Starting with the way of determining the value at a node, it's very intuitive: calculate the value $z = f(x, y)$.
class HillClimber:
    """A steepest-ascent hill-climbing algorithm."""

    def __init__(self, function):
        self.function = function

    def _value_at_node(self, node):
        return self.function(node)
The initial node can simply be taken as a random $(x, y)$ in their respective bounds.
    def _initial_node(self):
        # Assumes `from random import randint` at the top of the module.
        x = randint(self.function.x_bounds[0], self.function.x_bounds[1])
        y = randint(self.function.y_bounds[0], self.function.y_bounds[1])
        return Node(x, y)
Generating neighbors is actually quite simple as well: because our domain is limited to integers, we can simply look at the four cardinal directions (making sure we don't break the bounds when we do). We also randomize the order of the neighbors, to make things more interesting[2].
    def _generate_all_neighbors(self, node):
        # Assumes `from random import shuffle` at the top of the module.
        x, y = node.x, node.y
        nodes = [Node(x, y)]

        if x < self.function.x_bounds[1]:
            nodes.append(Node(x + 1, y))
        if x > self.function.x_bounds[0]:
            nodes.append(Node(x - 1, y))
        if y < self.function.y_bounds[1]:
            nodes.append(Node(x, y + 1))
        if y > self.function.y_bounds[0]:
            nodes.append(Node(x, y - 1))

        shuffle(nodes)
        return nodes
Finally, to get the highest value node, it's fairly straightforward:
    def _highest_valued_node(self, neighbors):
        max_point = neighbors[0]
        for point in neighbors[1:]:
            if self._value_at_node(point) > self._value_at_node(max_point):
                max_point = point
        return max_point
Piecing all this together, we get our Steepest-Ascent Hill-Climber:
    def climb(self):
        current_node = self._initial_node()
        while True:
            print("Exploring Node({}, {})".format(current_node.x, current_node.y))
            neighbors = self._generate_all_neighbors(current_node)
            successor = self._highest_valued_node(neighbors)

            if self._value_at_node(successor) <= self._value_at_node(current_node):
                return current_node
            current_node = successor
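Wiring it up against the paraboloid from earlier looks something like this; a hypothetical run, and the printed path will vary with the random start.

climber = HillClimber(paraboloid)
peak = climber.climb()
print("Peak at ({}, {})".format(peak.x, peak.y))  # (0, 0) for the paraboloid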
Does it work? Exactly as planned.
Exploring Node(5, -88)
...
Exploring Node(5, -67)
...
Exploring Node(5, -47)
...
Exploring Node(5, -27)
...
Exploring Node(5, -4)
Exploring Node(4, -4)
Exploring Node(3, -4)
Exploring Node(3, -3)
Exploring Node(2, -3)
Exploring Node(2, -2)
Exploring Node(1, -2)
Exploring Node(1, -1)
Exploring Node(1, 0)
Exploring Node(0, 0)
However, this was too easy. We had a function with one local optimum. Let's make things interesting.
Optimizing Steepest-Ascent Hill-Climbing
Suppose we keep our previous domain, but we change our function to the following:
$$ z = -(x^2 + y^2) + x \, y \, \cos x \, \sin y $$
This function isn't quite as intuitive to visualize; please reference the figure. Essentially, it's what we had before, but with thousands of local optima as we get further from the center. Our previous hill-climber would absolutely get destroyed by that function.

To alleviate this, we are going to use two optimizations:
- Instead of taking the steepest uphill move, we are going to simply take a random uphill move (known as Stochastic Hill-Climbing).
- When we get stuck, we are going to restart the search (known as Hill-Climbing With Restarts).
Stochastic Hill-Climbing
Updating the algorithm is fairly simple: all the previous mechanics are inherited; we just swap out _highest_valued_node with a stochastic version.
class StochasticHillClimber(HillClimber):
    """A stochastic steepest-ascent hill-climbing algorithm."""

    def _get_random_uphill_move(self, current_node, neighbors):
        # Assumes `from random import choice` at the top of the module.
        uphill_nodes = []
        for point in neighbors:
            if self._value_at_node(point) > self._value_at_node(current_node):
                uphill_nodes.append(point)

        return current_node if len(uphill_nodes) == 0 else choice(uphill_nodes)

    def climb(self):
        current_node = self._initial_node()
        while True:
            print("Exploring Node({}, {})".format(current_node.x, current_node.y))
            neighbors = self._generate_all_neighbors(current_node)
            successor = self._get_random_uphill_move(current_node, neighbors)

            if self._value_at_node(successor) <= self._value_at_node(current_node):
                return current_node
            current_node = successor
Running this algorithm, we get better results; but we can do better.
Stochastic Hill-Climbing With Restarts
For this, we simply have to restructure the climb function to handle generational effects (like keeping the max-valued node across generations). Not too difficult.
class StochasticHillClimberWithRestarts(StochasticHillClimber):
    """A stochastic steepest-ascent hill-climbing algorithm with restarts."""

    def climb(self, number_of_generations):
        max_node = self._initial_node()

        for generation in range(number_of_generations):
            current_node = self._initial_node()
            while True:
                print("Generation {}, Exploring Node({}, {}), Current Max Node({}, {})".format(generation, current_node.x, current_node.y, max_node.x, max_node.y))
                neighbors = self._generate_all_neighbors(current_node)
                successor = self._get_random_uphill_move(current_node, neighbors)

                if self._value_at_node(max_node) < self._value_at_node(current_node):
                    max_node = current_node
                if self._value_at_node(successor) <= self._value_at_node(current_node):
                    break
                current_node = successor

        return max_node
How did this one fare? Quite a bit better than all the rest. Let's take a look at what the exploration process looked like.

Marvelous: some climbers got to the top, many got caught in local optima, and a global optimum was found. A success.
Source Code
The source code can be found here.
- Algorithms used in games, where a player searches for an optimal move against an opponent. ↩︎
- If the neighbors are always generated deterministically, a sequence of ties can occur when choosing the highest-valued node. We randomize the neighbors so that a random node is chosen in the tie-breaker. ↩︎
Let's propose an Evolutionary Algorithm experiment; say we already have a framework in place (like the Secret Message framework we previously implemented). How difficult would it be to completely switch problem instances?
First, we need another problem instance. Our previous problem instance was pretty straightforward: it had one local optimum. Let's take on a problem with many local optima, such as the 3-SAT problem.
The premise of 3-SAT is simple. From a global pool of variables ($x_1$, $x_2$, $\ldots$, $x_n$), we have a basic clause of three variables or-ed together (signified by $\vee$):
$$
x_p \vee x_q \vee x_r
$$
Then, several clauses are and-ed together (signified by a $\wedge$):
$$\left(x_p \vee x_q \vee x_r\right) \wedge \left(x_s \vee x_t \vee x_u\right) \wedge \ldots \wedge \left(x_v \vee x_w \vee x_y\right)$$
The only stipulation is that any variable can be negated (signified by a $\neg$). So, supposing we want to negate $x_p$; $x_s$ and $x_u$; and $x_v$, $x_w$, and $x_y$; we can do the following:
$$
\left(\neg x_p \vee x_q \vee x_r\right) \wedge \left(\neg x_s \vee x_t \vee \neg x_u\right) \wedge \ldots \wedge \left(\neg x_v \vee \neg x_w \vee \neg x_y\right)
$$
Now, we simply have to assign all the variables such that every clause evaluates to true. It may sound simple, but it belongs to one of the hardest classes of problems in computer science (it is NP-complete), and there is no known algorithm that solves it efficiently in the general case.
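To make the formula concrete, here is a tiny sketch of evaluating a single clause under an assignment; the signed-index encoding is my own illustration, borrowed from the common CNF file convention.

def clause_satisfied(clause, assignment):
    """A clause is a tuple of signed variable indices, e.g. (1, -5, 7),
    where a negative index means the variable is negated. The clause is
    satisfied if any literal evaluates to True."""
    return any(assignment[abs(literal)] != (literal < 0) for literal in clause)

# (x1 ∨ ¬x2 ∨ x3) with x1 = x2 = x3 = False is satisfied via ¬x2.
print(clause_satisfied((1, -2, 3), {1: False, 2: False, 3: False}))  # True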
For a more visual approach, please reference the figure below. The goal is to make every inner node green by having at least one connected outer node be green. Note that the green nodes have to account for negation as well.

This sounds like a good problem candidate for an Evolutionary Algorithm[1].
The SAT Problem
We can skip over the problem-specific parts to focus more on the Evolutionary Algorithm aspect. Suppose we already have a well-defined SAT class that takes care of SAT-specific properties and methods, like so:
class SAT:
    def __init__(self, filename):
        """Create a SAT object that is read in from a CNF file."""
        ...

    @property
    def variables(self):
        """Get *all* the variables."""
        ...

    @property
    def total_clauses(self):
        """Get the total number of clauses."""
        ...

    @property
    def clauses_satisfied(self):
        """Get the number of satisfied clauses."""
        ...

    def __getitem__(self, key):
        """Get a particular variable (key)."""
        ...

    def __setitem__(self, key, value):
        """Set a variable (key) to a value (True/False)."""
        ...
From this, we can create a new genotype for our Individual.
The New Genotype
The genotype structure is very similar to what we had before:
- The genotype is the SAT problem we defined above.
- Fitness is defined as the percentage of total clauses satisfied.
- Mutation is uniform: choose a percentage $p$ of alleles and flip their values.
- Recombination is uniform: randomly assemble values from both parents.
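As an aside, here is a sketch of how the fitness and mutation bullets might translate into code, assuming the SAT interface above; the post elides these bodies, so the mutate name and rate parameter are illustrative.

from random import sample

class Individual:
    ...
    @property
    def fitness(self):
        """Percentage of the total clauses satisfied."""
        return 100 * self.genotype.clauses_satisfied / self.genotype.total_clauses

    def mutate(self, rate=0.05):
        """Uniform mutation: flip a fraction `rate` of randomly chosen variables."""
        variables = list(self.genotype.variables)
        for variable in sample(variables, int(rate * len(variables))):
            self.genotype[variable] = not self.genotype[variable]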
Looking at the refactoring, not much has changed.

The New EA Framework
Now that we have updated our Individual, the next thing to update would be the Evolutionary Algorithm framework, including:
- The Population
- The EA Itself
Except, we don't have to.
That's the beauty of Evolutionary Algorithms: they are incredibly adaptable. By swapping out the Individual, the rest of the Evolutionary Algorithm should still work.
For our SAT problem, some parameters were updated to make the algorithm more efficient:
- The mutation rate has been reduced to 5%.
- The tournament size has been reduced to 15 individuals (out of $\lambda = 100$).
The Result
So, let's try our Evolutionary Algorithm. Taking a SAT instance with 75 variables and 150 clauses, this makes the search space
$$
2^{75} \approx 3.77 \times 10^{22}
$$
Great, so roughly 1,000 times the number of grains of sand on Earth. Easy. So, can our EA do it?
After roughly 100 iterations, yes. See the visualization below.

Marvelous, our EA managed to find a solution after only 100 iterations in a giant search space. And all we had to do was swap out one class.
The Source Code
All source code can be found here.
- In reality, it's not a great candidate for an evolutionary algorithm. The gradient is sometimes murky, because flipping one variable's value can drastically decrease/increase the fitness function. Also, there are several great heuristics for solving the SAT problem. ↩︎