Advent of Code Day 1: A Deep Dive into Language Characteristics
Solving Advent of Code 2024 Day 1 in Elixir, Rust, Go, and Haskell to see how each language's philosophy and features guide us toward different solutions, even for the same simple problem.
Every year when December rolls around, I tell myself "This year, I'm going to solve Advent of Code!" And every year, I end up sticking to my comfort zone (hello, procrastination, my old friend). But 2024 feels different. Instead of just picking one new language, I thought "Why not go all in?"
So here I am, tackling the problem in four languages I've been working with over the past few years: Elixir, Rust, Go, and Haskell. A quick disclaimer though - I'm not here to showcase the most performant implementations or find the most elegant solutions. Instead, this is a learning journey about how different languages influence our problem-solving approach. It's fascinating to see how each language's philosophy and features guide us toward different solutions, even for the same simple problem.
Let's dive into Day 1 and see how these languages make us approach problem-solving in their own unique ways.
The Problem at Hand
The challenge seems simple at first: we're given pairs of numbers, one pair per line. We need to sort each column of numbers independently and then sum up the absolute differences between corresponding pairs. For example, given:
3 4
4 3
2 5
We first sort each column to get [2,3,4] and [3,4,5], then sum their differences: |2-3| + |3-4| + |4-5| = 3. Simple enough on paper, but as we'll see, each language brings its own perspective on how to handle this data transformation.
Elixir: Where Pattern Matching Shines
Let's start with Elixir. Pattern matching is often described as one of Elixir's awesome features, but its real power shows up in unexpected ways when we're handling data transformations:
defmodule Solution do
def parse(input) do
input
|> String.trim()
|> String.split("\n")
|> Enum.map(&parse_line/1)
|> Enum.unzip()
end
defp parse_line(line) do
line
|> String.split()
|> Enum.map(&String.to_integer/1)
|> then(fn [a, b] -> {a, b} end)
end
def solve({left, right}) do
[left, right]
|> Enum.map(&Enum.sort/1)
|> then(&calculate_distance/1)
end
defp calculate_distance([sorted_left, sorted_right]) do
Enum.zip_with(sorted_left, sorted_right, &abs(&1 - &2))
|> Enum.sum()
end
end
The beauty of Elixir's approach reveals itself in the details. Take the pattern matching in parse_line/1 - it's not just syntactic sugar for destructuring data. It's a declarative way of saying "this function only makes sense for pairs of numbers." If someone later modifies the input file to have three numbers per line, they'll get an immediate, clear failure rather than silent incorrect behavior.
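Here's a quick, hypothetical IEx check (assuming the Solution module above is compiled) showing what that failure looks like:

# A line with three numbers has no matching clause in the anonymous fn
# inside parse_line/1, so Elixir raises a FunctionClauseError instead of
# quietly producing a wrong pair.
Solution.parse("3 4\n4 3 7\n")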
The then/2 function showcases another subtle but powerful aspect of Elixir's design. In many functional languages, you'd need to break your pipeline when the data doesn't fit perfectly. Before Elixir 1.12, we'd write something like:
|> (fn data -> calculate_distance(data) end).()
Now with then/2, the code reads like natural transformation steps, making it easier to understand the data flow at a glance.
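For comparison, the equivalent step in solve/1 above is now just another stage of the pipeline:

|> then(&calculate_distance/1)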
Rust: When Safety Meets Performance
Rust's implementation reveals something fascinating about the relationship between ownership and optimization:
use std::error::Error;
use std::fs::File;
use std::io::{BufRead, BufReader};
#[derive(Debug)]
struct Lists {
left: Vec<i32>,
right: Vec<i32>,
}
impl Lists {
fn from_file(path: &str) -> Result<Self, Box<dyn Error>> {
let file = File::open(path)?;
let reader = BufReader::new(file);
let mut left = Vec::new();
let mut right = Vec::new();
for line in reader.lines() {
let line = line?;
let nums: Vec<i32> = line
.split_whitespace()
.map(str::parse)
.collect::<Result<_, _>>()?;
if nums.len() != 2 {
return Err("Each line must contain exactly two numbers".into());
}
left.push(nums[0]);
right.push(nums[1]);
}
Ok(Lists { left, right })
}
fn solve(mut self) -> i32 {
self.left.sort_unstable();
self.right.sort_unstable();
self.left.iter()
.zip(self.right.iter())
.map(|(a, b)| (a - b).abs())
.sum()
}
}
Look at the solve method's signature. By taking self by value instead of by reference, we're not just moving data around - we're making a promise: once solve runs, nobody else needs these lists, so we can sort them in place without cloning. That also lets us reach for sort_unstable(), which is typically faster than the stable sort() because it doesn't preserve the relative order of equal elements - an order we don't care about here anyway.
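For contrast, here's a hypothetical borrowing variant (not part of the solution above, just a sketch that would sit inside the same impl Lists block): keeping &self intact forces us to clone both vectors before sorting.

// Hypothetical alternative: borrow instead of consume.
// The caller keeps its Lists, but we pay for two extra allocations.
fn solve_borrowed(&self) -> i32 {
    let mut left = self.left.clone();
    let mut right = self.right.clone();
    left.sort_unstable();
    right.sort_unstable();
    left.iter()
        .zip(right.iter())
        .map(|(a, b)| (a - b).abs())
        .sum()
}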
The error handling through the ? operator demonstrates Rust's philosophy of "zero-cost abstractions." While it looks like simple syntactic sugar, it compiles down to the same efficient code you'd write by hand with explicit error checking. You get the ergonomics of exceptions with the performance of manual error handling.
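Conceptually, each line? above expands to something close to this hand-written match (a simplified sketch - the real desugaring goes through the Try trait and From for the error conversion):

// Roughly what `let line = line?;` means:
let line = match line {
    Ok(value) => value,
    Err(err) => return Err(err.into()),
};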
Go: Simplicity as a Feature
Go takes a different approach that initially might seem verbose:
package main
import (
"bufio"
"fmt"
"os"
"sort"
"strconv"
"strings"
)
type Lists struct {
Left []int
Right []int
}
func readLists(path string) (Lists, error) {
file, err := os.Open(path)
if err != nil {
return Lists{}, fmt.Errorf("opening file: %w", err)
}
defer file.Close()
var lists Lists
scanner := bufio.NewScanner(file)
for scanner.Scan() {
nums := strings.Fields(scanner.Text())
if len(nums) != 2 {
return Lists{}, fmt.Errorf("expected 2 numbers, got %d", len(nums))
}
left, err := strconv.Atoi(nums[0])
if err != nil {
return Lists{}, fmt.Errorf("parsing left number: %w", err)
}
right, err := strconv.Atoi(nums[1])
if err != nil {
return Lists{}, fmt.Errorf("parsing right number: %w", err)
}
lists.Left = append(lists.Left, left)
lists.Right = append(lists.Right, right)
}
return lists, scanner.Err()
}
func solve(lists Lists) int {
left := make([]int, len(lists.Left))
right := make([]int, len(lists.Right))
copy(left, lists.Left)
copy(right, lists.Right)
sort.Ints(left)
sort.Ints(right)
total := 0
for i := range left {
total += abs(left[i] - right[i])
}
return total
}
func abs(x int) int {
if x < 0 {
return -x
}
return x
}
The explicit error handling in Go is often criticized as repetitive, but it serves a crucial purpose in large codebases. Every error check is a decision point where you're forced to think about what could go wrong. This becomes invaluable when you're maintaining code months later and need to understand all the possible failure modes.
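As a sketch of why that pays off, here's a hypothetical main for the same package (it assumes an input.txt next to the binary and that errors is added to the import list): because readLists wraps errors with %w, the caller can still ask precise questions about the cause.

func main() {
    lists, err := readLists("input.txt")
    if err != nil {
        // errors.Is can see through the %w wrapping down to the os error.
        if errors.Is(err, os.ErrNotExist) {
            fmt.Fprintln(os.Stderr, "input file not found:", err)
            os.Exit(1)
        }
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println(solve(lists))
}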
Go's implementation also shows the value of predictability over cleverness. We could use interfaces or generics to make the code more "elegant," but Go pushes us toward simple, straightforward solutions that any team member can understand at a glance. It's a reminder that code is read far more often than it's written.
Haskell: Types as Documentation
Haskell brings a unique perspective where the type system becomes a powerful documentation tool:
{-# LANGUAGE RecordWildCards #-}
{-# LANGUAGE OverloadedStrings #-}
import Data.List (sort)
import qualified Data.Text as T
import qualified Data.Text.IO as TIO
import qualified Data.Text.Read as TR
data Lists = Lists
{ leftList :: [Int]
, rightList :: [Int]
} deriving Show
parseLine :: T.Text -> Either String (Int, Int)
parseLine line = case map readInt $ T.words line of
    [Right x, Right y] -> Right (x, y)
    _ -> Left "Each line must contain exactly two numbers"
  where
    readInt :: T.Text -> Either String Int
    readInt = fmap fst . TR.decimal
parseFile :: FilePath -> IO (Either String Lists)
parseFile path = do
content <- TIO.readFile path
return $ do
pairs <- traverse parseLine $ T.lines content
let (lefts, rights) = unzip pairs
return Lists { leftList = lefts, rightList = rights }
solve :: Lists -> Int
solve Lists{..} = sum $ zipWith distance (sort leftList) (sort rightList)
where distance x y = abs (x - y)
The type signatures in Haskell tell a complete story. When you see:
parseLine :: T.Text -> Either String (Int, Int)
You immediately know three things:
- This function might fail (Either)
- If it fails, you'll get a String explaining why
- If it succeeds, you'll get exactly two integers
The use of traverse here isn't just about handling errors - it's about composing effects. In other languages, we'd write nested loops or use intermediate collections to handle the possibility of failure at each step. Haskell lets us express this as a single transformation: "try to parse each line, collecting all successes or stopping at the first failure." The type system ensures we can't accidentally ignore a failure case.
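A hypothetical GHCi session (with the module above loaded and OverloadedStrings enabled so the literals are Text) makes that all-or-nothing behaviour concrete:

ghci> traverse parseLine ["3 4", "4 3", "2 5"]
Right [(3,4),(4,3),(2,5)]
ghci> traverse parseLine ["3 4", "3 4 5", "2 5"]
Left "Each line must contain exactly two numbers"

One bad line poisons the whole result, and the Left carries the reason - exactly the behaviour parseFile relies on.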
Closing Thoughts
What makes this comparison fascinating isn't just how each language solves the problem, but how each solution reveals the core values of its language:
- Elixir shows us how pattern matching and data transformation can make code both safe and readable
- Rust demonstrates how we can write high-level abstractions without sacrificing performance
- Go reminds us that sometimes the simplest solution is the best solution
- Haskell shows us how a powerful type system can make code self-documenting
These aren't just different ways to solve the same problem - they're different ways of thinking about programming itself. And that's what makes polyglot programming so valuable - it expands our mental models and makes us better programmers, regardless of which language we're using.