All solutions are in this GitHub repository.

Given a template of polymers represented through letters and a set of rules, we must insert a new polymer between each pair of the initial template according to the rules. Each insertion pass produces the template for the next step. Part one requires running the insertion algorithm ten times, whereas part two requires 40 iterations. Then, we must determine the least and most common polymers and subtract their occurrence counts.

My initial solution manipulated arrays of characters, inserting new elements according to the given rules. However, this approach only worked for part one: the array grows exponentially (roughly doubling with each of the 40 steps), eventually exceeding JavaScript's maximum array length (2^32 - 1). After attempting a recursive algorithm on each initial pair, I got a similar result.

The strategy that solved this challenge was to store pairs in an occurrence hash map. Each insertion step would determine the occurrence of new pairs based on the previous occurrence map. Finally, we can derive the polymer occurrence from the pair occurrence.

We can break down the solution as follows:

- read polymer template and insertion rules
- create a pair occurrence map for the template
- update pair occurrence map 10/40 times
- find polymer occurrence map
- subtract the least from the most common polymer

The template and pair insertion rules are separated by an empty line, thus we can use the `split` function with the `\n\n` separator to get these two sections from the input.

`const [template, pairInsertionRulesData] = data.split('\n\n')`

We process each line from the input and add the corresponding rule to the `pairInsertionRules` hash map. We use the pair as key and the polymer to be inserted as value.

```js
const pairInsertionRules = {}
pairInsertionRulesData.split('\n').forEach((line) => {
  const [pair, result] = line.split(' -> ')
  pairInsertionRules[pair] = result
})
```

The initial template from the example is `NNCB` and has the following pair occurrence hash map:

`{ 'NN': 1, 'NC': 1, 'CB': 1}`

We can create the pair occurrence map using the following algorithm:

```js
const getPairOccurenceMap = (template) => {
  const pairs = {}
  for (let i = 0; i < template.length - 1; ++i) {
    addToOccurenceMap(pairs, `${template.charAt(i)}${template.charAt(i + 1)}`)
  }
  return pairs
}
```

I extracted the logic for adding a key/value pair to an occurrence map into the `addToOccurenceMap` function. This is useful in some of the next steps.

```js
const addToOccurenceMap = (occurenceMap, key, value = 1) => {
  if (occurenceMap[key]) {
    occurenceMap[key] += value
  } else {
    occurenceMap[key] = value
  }
}
```

As mentioned earlier in this article, the strategy is to update the pair occurrence map rather than build a very large array or string exceeding the maximum limits. Thus, for part one we update 10 times and for part two 40 times.

```js
for (let i = 0; i < 10; ++i) {
  pairOccurenceMap = updatePairOccurenceMap(pairOccurenceMap, pairInsertionRules)
}
```

To build the new pair occurrence map, we iterate through each pair using the `Object.keys` function. Using the pair as an index, we can determine the pair's occurrences from `pairOccurenceMap` and the new polymer to insert from `pairInsertionRules`.

If a rule exists for the pair, then the new pair occurrence map will contain two new pairs rather than the original pair. For example, if the rule `NC -> B` for the pair `NC` exists, the new pair occurrence map will have pairs `NB` and `CB` instead of `NC`. Otherwise, if a rule does not exist, we add the original pair to the new occurrence map.

To build the new pairs we can use the template literals `${pair.charAt(0)}${polymer}` for the left-side pair and `${polymer}${pair.charAt(1)}` for the right-side pair.

We add the value representing the occurrences of the original pair to each new pair.

```js
const updatePairOccurenceMap = (pairOccurenceMap, pairInsertionRules) => {
  const newPairOccurenceMap = {}
  Object.keys(pairOccurenceMap).forEach((pair) => {
    const occurences = pairOccurenceMap[pair]
    const polymer = pairInsertionRules[pair]
    if (polymer) {
      addToOccurenceMap(newPairOccurenceMap, `${pair.charAt(0)}${polymer}`, occurences)
      addToOccurenceMap(newPairOccurenceMap, `${polymer}${pair.charAt(1)}`, occurences)
    } else {
      addToOccurenceMap(newPairOccurenceMap, pair, occurences)
    }
  })
  return newPairOccurenceMap
}
```

We can determine the polymer occurrence map based on the pair occurrence map and the first and last letter of the initial template. Each polymer, except the first and last one in the initial template, is counted twice in pairs. Thus, we must add the first and last polymers one more time, then divide each polymer occurrence by 2 to determine the correct polymer occurrence.

```js
const getPolymerOccurenceMap = (pairOccurenceMap, template) => {
  const occurenceMap = {}
  Object.keys(pairOccurenceMap).forEach(pair => {
    addToOccurenceMap(occurenceMap, pair.charAt(0), pairOccurenceMap[pair])
    addToOccurenceMap(occurenceMap, pair.charAt(1), pairOccurenceMap[pair])
  })
  occurenceMap[template.charAt(0)]++
  occurenceMap[template.charAt(template.length - 1)]++
  Object.keys(occurenceMap).forEach(key => {
    occurenceMap[key] = Math.floor(occurenceMap[key] / 2)
  })
  return occurenceMap
}
```

We can calculate the least and most common polymers by iterating through the keys of the polymer occurrence map.

```js
const getLeastAndMostCommon = (occurenceMap) => {
  let leastCommon
  let mostCommon
  Object.keys(occurenceMap).forEach((polymer) => {
    if (!leastCommon || occurenceMap[leastCommon] > occurenceMap[polymer]) {
      leastCommon = polymer
    }
    if (!mostCommon || occurenceMap[mostCommon] < occurenceMap[polymer]) {
      mostCommon = polymer
    }
  })
  return { leastCommon, mostCommon }
}
```

Then we can subtract the occurrence of the least common polymer from that of the most common:

`polymerOccurenceMap[mostCommon] - polymerOccurenceMap[leastCommon]`
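To convince ourselves the map-based approach is equivalent to actually expanding the string, here is a self-contained sketch that runs both for five steps on the `NNCB` template with a small hypothetical rule set (an illustration, not the real puzzle rules) and compares the resulting letter counts:

```js
// Hypothetical mini input for illustration (not the puzzle input).
const template = 'NNCB'
const rules = { NN: 'C', NC: 'B', CB: 'H', HB: 'C' }

const addToOccurenceMap = (map, key, value = 1) => {
  map[key] = (map[key] || 0) + value
}

const getPairOccurenceMap = (template) => {
  const pairs = {}
  for (let i = 0; i < template.length - 1; ++i) {
    addToOccurenceMap(pairs, template.charAt(i) + template.charAt(i + 1))
  }
  return pairs
}

const updatePairOccurenceMap = (pairMap, rules) => {
  const next = {}
  Object.keys(pairMap).forEach((pair) => {
    const occurences = pairMap[pair]
    const polymer = rules[pair]
    if (polymer) {
      addToOccurenceMap(next, pair.charAt(0) + polymer, occurences)
      addToOccurenceMap(next, polymer + pair.charAt(1), occurences)
    } else {
      addToOccurenceMap(next, pair, occurences)
    }
  })
  return next
}

const getPolymerOccurenceMap = (pairMap, template) => {
  const occurenceMap = {}
  Object.keys(pairMap).forEach((pair) => {
    addToOccurenceMap(occurenceMap, pair.charAt(0), pairMap[pair])
    addToOccurenceMap(occurenceMap, pair.charAt(1), pairMap[pair])
  })
  occurenceMap[template.charAt(0)]++
  occurenceMap[template.charAt(template.length - 1)]++
  Object.keys(occurenceMap).forEach((key) => {
    occurenceMap[key] = Math.floor(occurenceMap[key] / 2)
  })
  return occurenceMap
}

// Naive string expansion, only feasible for a few steps.
const expandOnce = (polymer, rules) => {
  let result = polymer.charAt(0)
  for (let i = 0; i < polymer.length - 1; ++i) {
    const pair = polymer.charAt(i) + polymer.charAt(i + 1)
    if (rules[pair]) { result += rules[pair] }
    result += polymer.charAt(i + 1)
  }
  return result
}

let pairMap = getPairOccurenceMap(template)
let naive = template
for (let i = 0; i < 5; ++i) {
  pairMap = updatePairOccurenceMap(pairMap, rules)
  naive = expandOnce(naive, rules)
}

const fast = getPolymerOccurenceMap(pairMap, template)
const slow = {}
naive.split('').forEach((c) => addToOccurenceMap(slow, c))

const sameCounts =
  Object.keys(fast).length === Object.keys(slow).length &&
  Object.keys(slow).every((c) => fast[c] === slow[c])
console.log(sameCounts) // true
```

Because the pair map carries exact pair counts, the derived letter counts match the naive expansion at every step, while using constant memory per distinct pair.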

The final solution for Day 14 is in my GitHub repository.

Day 14 was tricky and required optimally determining the least and most common letter occurrences in an exponentially growing string. An occurrence hash map storing pairs helped define the next set of occurrences, prevented storing a massive string and made it possible to solve the challenge.

All solutions are in this GitHub repository.

Given a list of `x`, `y` coordinates representing dots on a transparent Origami paper along with folding instructions, we must determine the number of visible dots after the first fold.

I broke down the solution as follows:

- read input into a dot coordinate hash map
- read the first instruction
- update dots after folding left
- update dots after folding up
- count dots

In terms of the data structure for the Origami paper, I opted for a hash map that stores dot coordinates. An alternative data structure is a matrix of characters `.` and `#`. However, since the problem doesn't specify constraints about the paper size and since the matrix size changes at each step, I deemed this approach more difficult to implement.

The first step of reading the input is separating the dot coordinates from the instructions using the empty line separator `\n\n`.

`const [dotsString, instructionsString] = data.split('\n\n')`

Next, we add each dot to the hash map using the `x,y` string as a key and `{x, y}` as value.

```js
const dots = {}
dotsString.split('\n').forEach((dot) => {
  const [x, y] = dot.split(',').map((n) => parseInt(n))
  dots[dot] = { x, y }
})
```

For example, given the `1,7` coordinates, we store:

`const dots = { '1,7': { x: 1, y: 7 }}`

We can get the first line among the instructions using the `split` function and index `0`.

`const firstInstruction = instructionsString.split('\n')[0]`

To interpret the folding instruction we need the axis (`x` or `y`) and the line coordinate. Thus, we can remove the `fold along` string.

```js
const [axis, lineString] = firstInstruction
  .replace('fold along ', '')
  .split('=')
const line = parseInt(lineString)
```

According to the problem, axis `x` is a vertical line, thus we must fold the paper to the left, whereas axis `y` is a horizontal line, so we must fold the paper up. We define the `foldLeft` and `foldUp` functions next.

```js
if (axis === 'x') {
  foldLeft(dots, line)
} else {
  foldUp(dots, line)
}
```

We must inspect each dot coordinate. We can use the `Object.keys` function to iterate through the elements of the dot hash map. We need to determine whether a dot is on the right half of the paper relative to the folding line. This condition is met when the `x` coordinate of the dot is larger than the `x` coordinate representing the line (`x > line`).

Each dot on the right half of the paper changes its `x` coordinate after folding by twice the difference between itself and the line. We can express this as `const newX = x - 2 * (x - line)`.

For example, the folding line `x=7` updates the `x` coordinate of dot `9,1` from `9` to `9 - 2 * (9 - 7) = 5`, resulting in dot `5,1`. We can simplify the expression to `const newX = 2 * line - x`.
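As a quick sanity check of the simplified formula (`foldCoordinate` is a hypothetical helper name, not from the original solution):

```js
// 2 * line - x mirrors a coordinate across the folding line
const foldCoordinate = (line, x) => 2 * line - x

console.log(foldCoordinate(7, 9)) // 5, matching the 9,1 -> 5,1 example
console.log(foldCoordinate(7, 8)) // 6
```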

Once we have determined the new coordinates, we can add the new dot to the hash map (using a `getKey` helper that builds the `x,y` key) and delete the dot with the old coordinates.

```js
const foldLeft = (dots, line) => {
  Object.keys(dots).forEach((key) => {
    const { x, y } = dots[key]
    if (x > line) {
      const newX = 2 * line - x
      dots[getKey(newX, y)] = { x: newX, y }
      delete dots[key]
    }
  })
}
```

To fold up, we can use the same algorithm as for folding to the left, except we update the `y` coordinate instead of `x`.

```js
const foldUp = (dots, line) => {
  Object.keys(dots).forEach((key) => {
    const { x, y } = dots[key]
    if (y > line) {
      const newY = 2 * line - y
      dots[getKey(x, newY)] = { x, y: newY }
      delete dots[key]
    }
  })
}
```

We can rely on the `Object.keys` function to count the number of visible dots after folding the Origami paper.

`Object.keys(dots).length`

The final solution for part one is in my GitHub repository.

Part two was about following all folding instructions, printing the final Origami result and revealing eight uppercase characters - the expected output.

To solve part two we can build on top of part one with the following additions:

- fold paper according to all instructions
- print folded Origami paper

Unlike in part one, we must iterate through and follow all folding instructions one by one.

```js
instructionsString.split('\n').forEach((instruction) => {
  const [axis, lineString] = instruction.replace('fold along ', '').split('=')
  const line = parseInt(lineString)
  if (axis === 'x') {
    foldLeft(dots, line)
  } else {
    foldUp(dots, line)
  }
})
```

Once we have the final coordinates of the dots on the folded Origami paper, we are able to print it. To do so, we need to know the size of the folded paper, thus we must find the maximum `x` and `y` coordinates.

```js
const [maxX, maxY] = Object.keys(dots).reduce(
  ([maxX, maxY], key) => [
    Math.max(maxX, dots[key].x),
    Math.max(maxY, dots[key].y),
  ],
  [0, 0]
)
```

Next, for each pair of coordinates between `0` and the size of the paper, we print either a `#` if a dot exists at those coordinates, or a space character.

```js
for (let y = 0; y <= maxY; y++) {
  let line = ''
  for (let x = 0; x <= maxX; x++) {
    const character = dots[getKey(x, y)] ? '#' : ' '
    line += character
  }
  console.log(line)
}
```
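Here is a compact end-to-end sketch of a single up-fold on a tiny hypothetical set of dots (not the puzzle input); `getKey` is assumed to build the `x,y` string key used throughout:

```js
// getKey builds the 'x,y' hash map key (assumed helper)
const getKey = (x, y) => `${x},${y}`

const dots = {}
const input = ['0,0', '0,14', '4,3', '2,10']
input.forEach((dot) => {
  const [x, y] = dot.split(',').map((n) => parseInt(n))
  dots[dot] = { x, y }
})

const foldUp = (dots, line) => {
  Object.keys(dots).forEach((key) => {
    const { x, y } = dots[key]
    if (y > line) {
      const newY = 2 * line - y
      dots[getKey(x, newY)] = { x, y: newY }
      delete dots[key]
    }
  })
}

foldUp(dots, 7)
console.log(Object.keys(dots).length) // 3 -- 0,14 folds onto the existing 0,0
```

Because `0,14` mirrors onto `0,0`, overwriting the same key, overlapping dots merge for free in the hash map representation.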

The final solution for part two is in my GitHub repository.

Day 13 was about folding a transparent Origami paper, updating coordinates of dots on the paper and revealing a final eight-character message.

All solutions are in this GitHub repository.

Given a set of connections between underground caves, we must find all possible paths between the `start` and `end` caves and output the number of found paths. We can visit large caves (named using uppercase letters) multiple times, whereas we can only visit small caves (named using lowercase letters) once.

The strategy for solving part one was a recursive algorithm that returns the number of paths from a given cave to the `end` cave. At each recursion step, we visit each linked cave of the current cave and recursively return the number of paths from the linked cave to the `end` cave.

In terms of data structure, I opted for representing the cave graph using an Adjacency List. In other words, a key-value map, where the keys are the cave names and each value is an array corresponding to linked caves. For example, the input from the problem description would produce the following key-value map:

```js
{
  start: [ 'A', 'b' ],
  A: [ 'start', 'c', 'b', 'end' ],
  b: [ 'start', 'A', 'd', 'end' ],
  c: [ 'A' ],
  d: [ 'b' ],
  end: [ 'A', 'b' ]
}
```

This data structure works well for iterating through adjacent caves.

I broke down the solution as follows:

- build the adjacency list from the input file
- get number of paths from the start cave recursively
- prevent visiting small caves twice

The first step to solving part one is reading each line representing a connection between two caves and adding this information to the adjacency list.

```js
const links = {}
const lines = data.split('\n')
lines.forEach((line) => {
  const [caveA, caveB] = line.split('-')
  addLink(links, caveA, caveB)
  addLink(links, caveB, caveA)
})
```

The `addLink` function adds a connected cave (e.g. `caveB`) to the corresponding list (e.g. `caveA`'s list) or creates a new list if the cave does not exist.

```js
const addLink = (links, caveA, caveB) => {
  if (links[caveA]) {
    links[caveA].push(caveB)
  } else {
    links[caveA] = [caveB]
  }
}
```

The first call of the recursive `getPaths` function begins from the `start` cave. It passes the following parameters:

- the adjacency list `links` used for finding connected caves
- an array of visited caves containing the `start` cave
- the cave from which to count paths to the `end` cave

`getPaths(links, ["start"], "start")`

The outline for getting the number of paths recursively is the following:

```js
const getPaths = (links, visitedSmallCaves, cave) => {
  if (cave === "end") {
    return 1
  }
  let generatedPaths = 0
  links[cave].forEach(linkedCave => {
    // TODO: if small cave not visited
    generatedPaths += getPaths(links, visitedSmallCaves, linkedCave)
  })
  return generatedPaths
}
```

The recursion stop condition is reaching the `end` cave; there we can return `1`, representing a path found. Alternatively, we might return `0` if we reach a dead end (e.g. a small cave connected only to already visited small caves).

At each recursion step, we iterate through the connected caves and sum the possible paths to the `end` cave.

Keep in mind that the outlined algorithm doesn't yet consider visited small caves. Thus, if you run it as is, it will go back and forth between the same two caves until the program throws a stack overflow exception. We'll take a look at visited caves next.

To check whether a cave is small, we need to verify whether it consists of lowercase characters:

`const isSmallCave = (cave) => cave.toLowerCase() === cave`

Then we can update the `visitedSmallCaves` array like so:

```js
const newVisitedArray = isSmallCave(linkedCave)
  ? [...visitedSmallCaves, linkedCave]
  : visitedSmallCaves
```

We are then able to check whether we have visited a small cave using the `includes` function like so:

`visitedSmallCaves.includes(linkedCave)`

Thus, the updated `getPaths` function contains the following:

```js
links[cave].forEach(linkedCave => {
  if (!visitedSmallCaves.includes(linkedCave)) {
    const newVisitedArray = isSmallCave(linkedCave)
      ? [...visitedSmallCaves, linkedCave]
      : visitedSmallCaves
    generatedPaths += getPaths(links, newVisitedArray, linkedCave)
  }
})
```
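Putting part one together, a minimal runnable sketch on the example graph from the problem statement (which has 10 distinct paths) might look like:

```js
// Example graph from the problem statement: start-A, start-b, A-c,
// A-b, b-d, A-end, b-end. It has exactly 10 paths in part one.
const links = {}

const addLink = (links, caveA, caveB) => {
  if (links[caveA]) {
    links[caveA].push(caveB)
  } else {
    links[caveA] = [caveB]
  }
}

const connections = ['start-A', 'start-b', 'A-c', 'A-b', 'b-d', 'A-end', 'b-end']
connections.forEach((line) => {
  const [caveA, caveB] = line.split('-')
  addLink(links, caveA, caveB)
  addLink(links, caveB, caveA)
})

const isSmallCave = (cave) => cave.toLowerCase() === cave

const getPaths = (links, visitedSmallCaves, cave) => {
  if (cave === 'end') { return 1 }
  let generatedPaths = 0
  links[cave].forEach((linkedCave) => {
    if (!visitedSmallCaves.includes(linkedCave)) {
      const newVisitedArray = isSmallCave(linkedCave)
        ? [...visitedSmallCaves, linkedCave]
        : visitedSmallCaves
      generatedPaths += getPaths(links, newVisitedArray, linkedCave)
    }
  })
  return generatedPaths
}

const paths = getPaths(links, ['start'], 'start')
console.log(paths) // 10
```

Seeding the visited array with `start` is what prevents the recursion from ever walking back into the starting cave.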

The final solution for part one is in my GitHub repository.

Part two changes the visiting rules by allowing one to visit a single small cave twice, except the `start` and `end` caves, which one can only visit once.

The strategy for solving part two was adding an item called `twice` to the `visitedSmallCaves` array after visiting a small cave twice.

I updated the recursive `getPaths` function to separate the visiting logic into two functions: `hasVisitedCave` and `getVisitedCaves`.

```js
links[cave].forEach(linkedCave => {
  if (!hasVisitedCave(visitedSmallCaves, linkedCave)) {
    generatedPaths += getPaths(links, getVisitedCaves(visitedSmallCaves, linkedCave), linkedCave)
  }
})
```

The `hasVisitedCave` function includes the new rules and allows visiting a single small cave twice, as long as it is not named `start`.

```js
const hasVisitedCave = (visited, cave) => {
  if (cave === "start") {
    return true
  }
  return visited.includes(cave) && visited.includes("twice")
}
```

The `getVisitedCaves` function skips adding large caves, adds the `twice` element to the array once a small cave has been visited twice, and adds small caves that have not been visited before.

```js
const getVisitedCaves = (visited, cave) => {
  const isSmall = isSmallCave(cave)
  if (!isSmall) {
    return visited
  }
  if (visited.includes(cave)) {
    return [...visited, "twice"]
  }
  return [...visited, cave]
}
```

The final solution for part two is in my GitHub repository.

Day 12 was about finding all paths between two caves recursively. A graph implemented using an Adjacency List helped store connected caves.

All solutions are in this GitHub repository.

Part one aims to count the number of flashes a group of bioluminescent dumbo octopuses produce.

We can break down the solution as follows:

- read the input into a matrix
- outline algorithm for counting flashes for 100 steps
- increment energy levels of all octopuses
- initialize a binary flash map
- recursively mark flashes and update matrix
- sum elements of the binary flash map
- reset energy levels for flashing octopuses

Initializing and traversing a matrix of numbers is done in the same way as described in my previous blog post Advent of Code 2021: Day 9. I suggest you check it out as I won't write about it again here.

We can use a `for` loop to count flashes for 100 steps.

```js
let flashesCount = 0
for (let i = 0; i < 100; i++) {
  flashesCount += countFlashes(matrix, rows, cols)
}
return flashesCount
```

The `countFlashes` function outlines the changes in energy levels at each step and counts the corresponding flashes. As described in the problem, we must first increment the energy level of each octopus. Then, each octopus with an energy level higher than `9` flashes, which increments the energy levels of neighbouring octopuses. We can use a binary flash map to store and count the octopuses that flash in each step.

```js
const countFlashes = (matrix, rows, cols) => {
  // increment energy level
  // initialize binary flash map
  // recursively flash octopuses with energy level higher than 9
  // reset energy level
  // sum binary flash map
}
```

The first part of our algorithm is incrementing each number in the matrix by `1`.

```js
const countFlashes = (matrix, rows, cols) => {
  incrementEnergyLevel(matrix)
  // initialize binary flash map
  // recursively flash octopuses with energy level higher than 9
  // reset energy level
  // sum binary flash map
}
```

We can achieve this by traversing the matrix using nested `forEach` functions.

```js
const incrementEnergyLevel = (matrix) => {
  matrix.forEach((row, i) => {
    row.forEach((energyLevel, j) => {
      matrix[i][j]++
    })
  })
}
```

We can initialize a binary flash map to indicate which octopuses flash at each step. The binary flash map is a matrix of the same size as the octopuses matrix. We initialize each element with 0 and mark flashing octopuses with 1.

`const flashMap = initializeFlashMap(rows, cols)`

We can then traverse the matrix of octopuses to find the ones with an energy level higher than 9. Once found, we mark that octopus and potentially neighbouring octopuses as flashing using the recursive `flash` function.

```js
const countFlashes = (matrix, rows, cols) => {
  incrementEnergyLevel(matrix)
  const flashMap = initializeFlashMap(rows, cols)
  matrix.forEach((row, i) => {
    row.forEach((energyLevel, j) => {
      if (energyLevel > 9) {
        flash(matrix, flashMap, rows, cols, i, j)
      }
    })
  })
  // reset energy level
  // sum binary flash map
}
```

The recursive `flash` function marks the current octopus as flashing by setting `flashMap[i][j] = 1`. It then goes through each neighbour to determine whether it also needs to flash. The stop condition for this recursion is reaching an element that has already flashed.

```js
const flash = (matrix, flashMap, rows, cols, i, j) => {
  if (flashMap[i][j] === 1) {
    return
  }
  flashMap[i][j] = 1
  const neighbours = getNeighbours(rows, cols, i, j)
  neighbours.forEach(({ ni, nj }) => {
    matrix[ni][nj]++
    if (matrix[ni][nj] > 9) {
      flash(matrix, flashMap, rows, cols, ni, nj)
    }
  })
}
```

To find the neighbours of an octopus, I used the eight relative index offsets around it, including diagonals.

```js
const getNeighbours = (rows, cols, i, j) => {
  const neighbours = []
  for (let ni = i - 1; ni <= i + 1; ni++) {
    for (let nj = j - 1; nj <= j + 1; nj++) {
      if (ni === i && nj === j) {
        continue
      }
      neighbours.push({ ni, nj })
    }
  }
  return neighbours
}
```

The algorithm, however, does not take into account elements on the edge of the matrix. We can use `Math` functions to adjust the limits of the `for` loops to accommodate edges.

```js
const getNeighbours = (rows, cols, i, j) => {
  const neighbours = []
  for (let ni = Math.max(0, i - 1); ni <= Math.min(rows - 1, i + 1); ni++) {
    for (let nj = Math.max(0, j - 1); nj <= Math.min(cols - 1, j + 1); nj++) {
      if (ni === i && nj === j) {
        continue
      }
      neighbours.push({ ni, nj })
    }
  }
  return neighbours
}
```
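A small throwaway check (duplicating the function above) confirms the clamped loops behave correctly at corners and edges of a 3x3 grid:

```js
const getNeighbours = (rows, cols, i, j) => {
  const neighbours = []
  for (let ni = Math.max(0, i - 1); ni <= Math.min(rows - 1, i + 1); ni++) {
    for (let nj = Math.max(0, j - 1); nj <= Math.min(cols - 1, j + 1); nj++) {
      if (ni === i && nj === j) { continue }
      neighbours.push({ ni, nj })
    }
  }
  return neighbours
}

const corner = getNeighbours(3, 3, 0, 0).length
const centre = getNeighbours(3, 3, 1, 1).length
const edge = getNeighbours(3, 3, 0, 1).length
console.log(corner, centre, edge) // 3 8 5
```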

We can traverse the binary flash matrix using the `forEach` function to sum its elements. Since flashing octopuses are marked with `1` and all other octopuses with `0`, this sum counts the number of flashing octopuses.

```js
const sumFlashMap = (flashMap) => {
  let sum = 0
  flashMap.forEach((row) => {
    row.forEach((flash) => {
      sum += flash
    })
  })
  return sum
}
```

To reset the energy levels of flashing octopuses, we traverse the octopus matrix and set the octopuses with energy larger than `9` to `0`.

```js
const resetEnergyLevel = (matrix) => {
  matrix.forEach((row, i) => {
    row.forEach((energyLevel, j) => {
      if (energyLevel > 9) {
        matrix[i][j] = 0
      }
    })
  })
}
```

The final solution for part one is in my GitHub repository.

Part two was about finding the step when all octopuses flash simultaneously. Since we already implemented a function that counts flashes on each step, we can run the program and increment the step until all 100 octopuses flash.

```js
let step = 1
while (countFlashes(matrix, rows, cols) !== 100) {
  step++
}
return step
```
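To check the flash cascade end to end, here is a self-contained sketch on a hypothetical 2x2 grid of `9`s (not the puzzle input); `initializeFlashMap` is assumed to build a zero-filled matrix, since the article doesn't show its body:

```js
// Assumed helper: a zero-filled rows x cols matrix
const initializeFlashMap = (rows, cols) =>
  Array.from({ length: rows }, () => new Array(cols).fill(0))

const getNeighbours = (rows, cols, i, j) => {
  const neighbours = []
  for (let ni = Math.max(0, i - 1); ni <= Math.min(rows - 1, i + 1); ni++) {
    for (let nj = Math.max(0, j - 1); nj <= Math.min(cols - 1, j + 1); nj++) {
      if (ni === i && nj === j) { continue }
      neighbours.push({ ni, nj })
    }
  }
  return neighbours
}

const flash = (matrix, flashMap, rows, cols, i, j) => {
  if (flashMap[i][j] === 1) { return }
  flashMap[i][j] = 1
  getNeighbours(rows, cols, i, j).forEach(({ ni, nj }) => {
    matrix[ni][nj]++
    if (matrix[ni][nj] > 9) { flash(matrix, flashMap, rows, cols, ni, nj) }
  })
}

const countFlashes = (matrix, rows, cols) => {
  // increment every energy level
  matrix.forEach((row, i) => row.forEach((_, j) => matrix[i][j]++))
  const flashMap = initializeFlashMap(rows, cols)
  // recursively flash octopuses above energy level 9
  matrix.forEach((row, i) => {
    row.forEach((energyLevel, j) => {
      if (energyLevel > 9) { flash(matrix, flashMap, rows, cols, i, j) }
    })
  })
  // reset flashed octopuses
  matrix.forEach((row, i) => {
    row.forEach((energyLevel, j) => {
      if (energyLevel > 9) { matrix[i][j] = 0 }
    })
  })
  // sum the binary flash map
  return flashMap.flat().reduce((sum, f) => sum + f, 0)
}

const matrix = [[9, 9], [9, 9]]
const firstStep = countFlashes(matrix, 2, 2)
const secondStep = countFlashes(matrix, 2, 2)
console.log(firstStep)  // 4 -- every octopus flashes in step one
console.log(secondStep) // 0 -- all were reset to 0, now at energy 1
```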

The final solution for part two is in my GitHub repository.

Day 11 was about traversing a matrix of octopuses, increasing their energy level with each step, recursively flashing octopuses and their neighbours, and finally, determining when all octopuses flash simultaneously.

All solutions are in this GitHub repository.

Given a set of lines with opening and closing brackets, we must find syntax errors and calculate their score.

My strategy for solving this challenge is using a FILO (First In, Last Out) data structure that stores open brackets. We add new open brackets to the array. If a bracket is closed correctly, we pop the last open bracket from the array. Otherwise, if the closing bracket does not correspond to the previous open bracket, we have found a syntax error, and we can calculate its score.

We can break down the solution like so:

- outline the algorithm that sums syntax error scores
- add open brackets to the FILO data structure
- remove open brackets when the corresponding closing bracket matches
- calculate illegal bracket score when the corresponding closing bracket does not match

```js
const lines = data.split('\n')
return lines.reduce((sum, line) => sum + getErrorScore(line), 0)
```

```js
const getErrorScore = (line) => {
  const brackets = line.split('')
  let score = 0
  brackets.some(bracket => {
    // If there is a syntax error overwrite score
  })
  return score
}
```

To check whether we reached an opening bracket, we can use the `includes` function on the array of opening brackets like so:

`const isOpeningBracket = (bracket) => ["{", "[", "(", "<"].includes(bracket)`

We can then use the `isOpeningBracket` function to determine whether to add a new item to the `openBrackets` FILO data structure.

```js
const getErrorScore = (line) => {
  const brackets = line.split('')
  const openBrackets = []
  let score = 0
  brackets.some(bracket => {
    if (isOpeningBracket(bracket)) {
      openBrackets.push(bracket)
    }
    // If there is a syntax error overwrite score
  })
  return score
}
```

We can define matching brackets like so:

`const matchingBrackets = { "(": ")", "[": "]", "{": "}", "<": ">"}`

We can use the following function to check whether a closing bracket matches an existing open bracket. We also check whether the opening bracket exists because a closing bracket written before any opening bracket represents a syntax error.

```js
const isCorrectClosingBracket = (openingBracket, closingBracket) => {
  if (!openingBracket) {
    return false
  }
  return matchingBrackets[openingBracket] === closingBracket
}
```

We can use the `pop` function to remove the last element in the FILO structure.

```js
if (isOpeningBracket(bracket)) {
  openBrackets.push(bracket)
} else if (isCorrectClosingBracket(openBrackets[openBrackets.length - 1], bracket)) {
  openBrackets.pop()
}
```

Each illegal bracket has a defined score, as follows:

`const illegalBracketScoring = { ")": 3, "]": 57, "}": 1197, ">": 25137}`

We can pop the last element of `openBrackets` regardless of whether the closing bracket is correct or not. Thus, we can rewrite the conditions from the `getErrorScore` function as follows:

```js
let score = 0
brackets.some(bracket => {
  if (isOpeningBracket(bracket)) {
    openBrackets.push(bracket)
  } else if (!isCorrectClosingBracket(openBrackets.pop(), bracket)) {
    score = illegalBracketScoring[bracket]
    return true
  }
})
return score
```
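The pieces of part one can be assembled into a self-contained sketch; the test lines here are hypothetical, not the puzzle input:

```js
const matchingBrackets = { '(': ')', '[': ']', '{': '}', '<': '>' }
const illegalBracketScoring = { ')': 3, ']': 57, '}': 1197, '>': 25137 }

const isOpeningBracket = (bracket) => ['{', '[', '(', '<'].includes(bracket)

const isCorrectClosingBracket = (openingBracket, closingBracket) => {
  if (!openingBracket) { return false }
  return matchingBrackets[openingBracket] === closingBracket
}

const getErrorScore = (line) => {
  const openBrackets = []
  let score = 0
  line.split('').some((bracket) => {
    if (isOpeningBracket(bracket)) {
      openBrackets.push(bracket)
    } else if (!isCorrectClosingBracket(openBrackets.pop(), bracket)) {
      score = illegalBracketScoring[bracket]
      return true
    }
  })
  return score
}

const corrupted = getErrorScore('(]')  // '(' closed by ']'
const valid = getErrorScore('(())')    // balanced, no error
const earlyClose = getErrorScore(')')  // closing with an empty stack
console.log(corrupted, valid, earlyClose) // 57 0 3
```

Note the third case: popping an empty array yields `undefined`, which `isCorrectClosingBracket` treats as a syntax error, so a closing bracket with no opener is scored too.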

The final solution for part one is in my GitHub repository.

Part two is about filtering the corrupted lines identified in part one and focusing on the incomplete lines. The goal is to find the missing brackets of incomplete lines and calculate a score corresponding to each completed line. We must sort all scores to return the expected score in the middle.

We can reuse a great deal of the solutions from part one with a few additions:

- filter corrupted lines using the error score
- use opening brackets to calculate the completion score
- find the middle completion score

We can reuse the `getErrorScore` function from part one, with a few adjustments:

- remove the score calculation since it is unnecessary for part two
- return a boolean `hasError` set to `true` when the line has a syntax error
- return the array of open brackets
- rename the function to `getOpeningBrackets` to better fit the goal of part two

```js
const getOpeningBrackets = (line) => {
  const brackets = line.split('')
  const openingBrackets = []
  const hasError = brackets.some((bracket) => {
    if (isOpeningBracket(bracket)) {
      openingBrackets.push(bracket)
    } else if (!isCorrectClosingBracket(openingBrackets.pop(), bracket)) {
      return true
    }
  })
  return [hasError, openingBrackets]
}
```

We can call the `getOpeningBrackets` function to filter out the lines with syntax errors. The `reduce` function returns the array of completed line scores by adding a calculated score for each line without a syntax error.

```js
const completedLineScores = lines.reduce((completed, line) => {
  const [hasError, openingBrackets] = getOpeningBrackets(line)
  if (hasError) {
    return completed
  } else {
    return [...completed, getCompletionScore(openingBrackets)]
  }
}, [])
```

Each closing bracket has the following score:

`const completionScoring = { ')': 1, ']': 2, '}': 3, '>': 4,}`

We must traverse open brackets in reverse order to find the matching closing bracket and calculate its corresponding score.

```js
const getCompletionScore = (openingBrackets) => {
  return openingBrackets
    .reverse()
    .reduce(
      (score, bracket) => score * 5 + completionScoring[matchingBrackets[bracket]],
      0
    )
}
```
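As a quick check of the scoring logic: for a hypothetical leftover stack `['[', '(']`, the completion is `')]'` and the score is `(0 * 5 + 1) * 5 + 2 = 7`:

```js
const matchingBrackets = { '(': ')', '[': ']', '{': '}', '<': '>' }
const completionScoring = { ')': 1, ']': 2, '}': 3, '>': 4 }

const getCompletionScore = (openingBrackets) =>
  openingBrackets
    .reverse()
    .reduce(
      (score, bracket) => score * 5 + completionScoring[matchingBrackets[bracket]],
      0
    )

// '[(' is completed by ')]'
const completion = getCompletionScore(['[', '('])
console.log(completion) // 7
```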

Once we have sorted the array of scores, we can determine the middle score using the middle of the array length as index.

```js
completedLineScores.sort((a, b) => a - b)
const middleIndex = Math.floor(completedLineScores.length / 2)
return completedLineScores[middleIndex]
```

The final solution for part two is in my GitHub repository.

Day 10 was about processing lines of brackets, filtering lines with syntax errors and completing lines without errors. A FILO (First In, Last Out) data structure helped match open and close brackets.

All solutions are in this GitHub repository.

The goal of part one is to find the low points of the given matrix of heights. Then we must calculate the risk level by summing the heights of low points and adding 1 for each low point.

We can break down the solution as follows:

- read the input into a matrix
- traverse the matrix
- check if a height is a low point by comparing it to each neighbour
- calculate the risk level

Storing the elements in a matrix is a good choice since the problem requires checking neighbouring elements. The `split` function separates the data into rows and columns. The `parseInt` function converts each digit from a string to a number.

```js
const heightMatrix = data
  .split('\n')
  .map((row) => row.split('').map((height) => parseInt(height)))
```

We can traverse the matrix using nested `for` loops or `forEach` functions.

```js
heightMatrix.forEach((row, i) => {
  row.forEach((height, j) => {
    // compare height with neighbours
  })
})
```

An element at `i, j` has up to four neighbours: `i - 1, j` (top), `i + 1, j` (bottom), `i, j - 1` (left) and `i, j + 1` (right). We can determine if a height is a low point by comparing it to each neighbour. If it is higher than or equal to any neighbouring height, it is not a low point (`return false`). Otherwise, we have found a low point (`return true`).

```js
const isLowPoint = (matrix, i, j) => {
  if (matrix[i][j] >= matrix[i - 1][j]) {
    return false
  }
  if (matrix[i][j] >= matrix[i + 1][j]) {
    return false
  }
  if (matrix[i][j] >= matrix[i][j - 1]) {
    return false
  }
  if (matrix[i][j] >= matrix[i][j + 1]) {
    return false
  }
  return true
}
```

The algorithm, however, does not account for elements on the edge of the matrix. Consider the number of rows and columns defined according to the height matrix lengths:

```js
const rows = heightMatrix.length
const cols = heightMatrix[0].length
```

Elements on the edge of the matrix don't have all neighbours, so we must add the following rules:

- on the first row `i === 0`, so we should not check the top neighbour `i - 1` unless `i - 1 >= 0`
- on the last row `i === rows - 1`, so we should not check the bottom neighbour `i + 1` unless `i + 1 < rows`
- on the first column `j === 0`, so we should not check the left neighbour `j - 1` unless `j - 1 >= 0`
- on the last column `j === cols - 1`, so we should not check the right neighbour `j + 1` unless `j + 1 < cols`

So, we can rewrite the checks of the `isLowPoint` function as follows:

```js
const isLowPoint = (matrix, rows, cols, i, j) => {
  if (i - 1 >= 0 && matrix[i][j] >= matrix[i - 1][j]) {
    return false
  }
  if (i + 1 < rows && matrix[i][j] >= matrix[i + 1][j]) {
    return false
  }
  if (j - 1 >= 0 && matrix[i][j] >= matrix[i][j - 1]) {
    return false
  }
  if (j + 1 < cols && matrix[i][j] >= matrix[i][j + 1]) {
    return false
  }
  return true
}
```

Finally, we can calculate the risk level by adding `1 + height` to the total for each low point found.

```js
let riskLevel = 0
heightMatrix.forEach((row, i) => {
  row.forEach((height, j) => {
    if (isLowPoint(heightMatrix, rows, cols, i, j)) {
      riskLevel += 1 + height
    }
  })
})
```
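Assembled into a runnable sketch on the example height map from the problem statement, whose four low points (1, 0, 5 and 5) give a risk level of 15:

```js
// Example height map from the problem statement
const heightMatrix = [
  '2199943210',
  '3987894921',
  '9856789892',
  '8767896789',
  '9899965678',
].map((row) => row.split('').map((h) => parseInt(h)))

const rows = heightMatrix.length
const cols = heightMatrix[0].length

const isLowPoint = (matrix, rows, cols, i, j) => {
  if (i - 1 >= 0 && matrix[i][j] >= matrix[i - 1][j]) { return false }
  if (i + 1 < rows && matrix[i][j] >= matrix[i + 1][j]) { return false }
  if (j - 1 >= 0 && matrix[i][j] >= matrix[i][j - 1]) { return false }
  if (j + 1 < cols && matrix[i][j] >= matrix[i][j + 1]) { return false }
  return true
}

let riskLevel = 0
heightMatrix.forEach((row, i) => {
  row.forEach((height, j) => {
    if (isLowPoint(heightMatrix, rows, cols, i, j)) {
      riskLevel += 1 + height
    }
  })
})
console.log(riskLevel) // 15
```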

The final solution for part one is in my GitHub repository.

The goal of part two is to calculate the size of each basin surrounding a low point. Basins are groups of neighbouring heights delimited by heights of `9`, since `9` is not part of any basin. We must multiply the three largest basin sizes to solve the challenge.

I wrote a solution using the following top-down approach:

- add basin size for each low point into an array
- sort the array to find and multiply the three largest basin sizes
- create a binary basin map
- calculate basin size based on binary basin map
- mark basin map recursively

I adjusted the solution from part one to calculate and add basin size to an array whenever we find a low point.

```js
const basinSizes = []
heightMatrix.forEach((row, i) => {
  row.forEach((height, j) => {
    if (isLowPoint(heightMatrix, rows, cols, i, j)) {
      basinSizes.push(findBasinSize(heightMatrix, rows, cols, i, j))
    }
  })
})
```

We can then outline the `findBasinSize` function like so:

```js
const findBasinSize = (matrix, rows, cols, i, j) => {
  const basinMap = initializeBasinMap(rows, cols)
  markBasin(matrix, basinMap, rows, cols, i, j)
  return calculateBasinSize(basinMap)
}
```

We can sort the array of basin sizes in descending order to find the three largest sizes. We can then return the product of those sizes.

```js
basinSizes.sort((a, b) => b - a)
return basinSizes[0] * basinSizes[1] * basinSizes[2]
```

The binary basin map is a matrix of the same size as the height matrix. We initialize each element with `0` and mark the ones belonging to the basin with `1`.

```js
const initializeBasinMap = (rows, cols) => {
  const basinMap = new Array(rows)
  for (let k = 0; k < rows; k++) {
    basinMap[k] = new Array(cols).fill(0)
  }
  return basinMap
}
```

We can write a recursive algorithm that starts from the low point, marks it on the basin map, then traverses all neighbours and marks them on the basin map. The recursion stop condition is reaching an element with height `9` or an element already marked on the map.

```js
const markBasin = (matrix, basinMap, rows, cols, i, j) => {
  if (matrix[i][j] === 9 || basinMap[i][j] === 1) {
    return
  }
  basinMap[i][j] = 1
  if (i - 1 >= 0) {
    markBasin(matrix, basinMap, rows, cols, i - 1, j)
  }
  if (i + 1 < rows) {
    markBasin(matrix, basinMap, rows, cols, i + 1, j)
  }
  if (j - 1 >= 0) {
    markBasin(matrix, basinMap, rows, cols, i, j - 1)
  }
  if (j + 1 < cols) {
    markBasin(matrix, basinMap, rows, cols, i, j + 1)
  }
}
```

The first call contains the low point indices:

`markBasin(matrix, basinMap, rows, cols, i, j)`

Once we have marked each height belonging to the current basin, we can sum all the basin map values to find the basin's size.

```js
const calculateBasinSize = (basinMap) => {
  let size = 0
  basinMap.forEach((row) => {
    row.forEach((value) => {
      size += value
    })
  })
  return size
}
```
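As a design note, the nested loops can be collapsed with `Array.prototype.flat` and `reduce`; the behaviour is identical:

```js
// Flatten the binary map into one array and sum its values.
const calculateBasinSize = (basinMap) =>
  basinMap.flat().reduce((sum, value) => sum + value, 0)

console.log(calculateBasinSize([[0, 1], [1, 1]])) // 3
```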

The final solution for part two is in my GitHub repository.

Day 9 was about traversing a matrix of heights, finding low points by comparing each element to its neighbours in the matrix and recursively finding a basin of adjacent heights.

All solutions are in this GitHub repository.

The problem is about decoding digits represented using a seven-segment display. Each segment has a name from `a` to `g`. By default, if we name the segments in order, the digits from `0` to `9` are represented by the segments `abcefg`, `cf`, `acdeg`, `acdfg`, `bcdf`, `abdfg`, `abdefg`, `acf`, `abcdefg` and `abcdfg`.

The problem is we don't know which letter corresponds to which segment. We must deduce it from a set of ten unique segment patterns representing digits from 0 to 9 given in random order. For example, given the pattern

`acedgfb cdfbe gcdfa fbcad dab cefabd cdfgeb eafb cagedb ab`

we must find a way to deduce the name of each segment. The segment mapping from the example is illustrated below.

The pattern from the example represents the following digits:

```
acedgfb: 8
cdfbe: 5
gcdfa: 2
fbcad: 3
dab: 7
cefabd: 9
cdfgeb: 6
eafb: 4
cagedb: 0
ab: 1
```

We are given multiple lines, each containing ten unique signal patterns, a delimiter and four output values. For example,

`acedgfb cdfbe gcdfa fbcad dab cefabd cdfgeb eafb cagedb ab | cdfeb fcadb cdfeb cdbaf`

The goal is to count the number of four-digit output values representing either `1`, `4`, `7` or `8`. The problem description hints that these digits have a unique number of segments. Indeed, only `1` has two segments, only `4` has four segments, only `7` has three segments and only `8` has seven segments.

To solve part one we must:

- read the output values
- check if any output value has a unique length
- sum all output values with unique length

Since we only need the output values for part one, we can extract them from each line using the delimiter `|`.

```js
const lines = data.split('\n')
const outputValues = lines.map((line) => line.split(' | ')[1])
```

Then, to extract each digit into an array, we can use the space as a separator.

`const digits = outputValue.split(" ")`

Since `1`, `4`, `7` and `8` are represented using `2`, `4`, `3` and `7` segments respectively, we can check whether the length of the digit is one of those numbers. The `includes` function is useful for performing this check.

`[2, 4, 3, 7].includes(digit.length)`

We can use the `reduce` function to count output values respecting the condition previously defined.

```js
digits.reduce((digitCount, digit) => {
  if ([2, 4, 3, 7].includes(digit.length)) {
    return digitCount + 1
  }
  return digitCount
}, 0)
```

Then, we can use the `reduce` function again to sum the counts from all lines.

```js
return outputValues.reduce((digitCount, outputValue) => {
  const digits = outputValue.split(' ')
  return digitCount + digits.reduce((count, digit) => {
    if ([2, 4, 3, 7].includes(digit.length)) {
      return count + 1
    }
    return count
  }, 0)
}, 0)
```
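Wrapped in a function (the `countUniqueLengthDigits` name is mine), the logic can be exercised on a couple of hand-made output values:

```js
// Count output digits whose segment count is unique (i.e. digits 1, 4, 7, 8).
const countUniqueLengthDigits = (outputValues) =>
  outputValues.reduce((digitCount, outputValue) => {
    const digits = outputValue.split(' ')
    return digitCount + digits.reduce((count, digit) => {
      if ([2, 4, 3, 7].includes(digit.length)) {
        return count + 1
      }
      return count
    }, 0)
  }, 0)

// 'ab' (1), 'dab' (7), 'eafb' (4) and 'acedgfb' (8) have unique lengths;
// the five-segment patterns on the second line do not.
console.log(countUniqueLengthDigits(['ab dab eafb acedgfb', 'cdfbe gcdfa'])) // 4
```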

The final solution for part one is in my GitHub repository.

The second part of the problem was the tricky one. The goal of the second part was to decode each output value into a four-digit number and sum all the four-digit numbers.

I used a top-down approach where I wrote higher-level functions first, thus defining the overall structure of the program, then defined the lower-level functions and went into detail. Here is how I broke down the problem:

- sum all output numbers
- decode output number
- find segment mapping
- reveal corresponding digits using segment map
- find output value based on revealed digits

The trickiest part was finding the segment mapping, in other words, identifying the name of each segment. The strategy was:

- calculating segment occurrences in all ten unique patterns
- identifying the segments with unique occurrences
- identifying digits with unique segment lengths
- using a process of elimination

We can outline the solution as follows. We define the `getOutput` function afterwards.

```js
const solve = (data) => {
  const lines = data.split('\n')
  return lines.reduce((sum, line) => sum + getOutput(line), 0)
}
```

We can further outline the solution:

```js
const getOutput = (line) => {
  const [segmentPatternsString, outputValuesString] = line.split(' | ')
  const segmentMap = findSegmentNamesMap(segmentPatternsString)
  const correspondingSevenSegmentDisplay = getSevenSegmentDisplay(segmentMap)
  return getOutputDigits(outputValuesString, correspondingSevenSegmentDisplay)
}
```

Let's look at an example line. Given the line

`acedgfb cdfbe gcdfa fbcad dab cefabd cdfgeb eafb cagedb ab | cdfeb fcadb cdfeb cdbaf`

the `segmentPatternsString` is `acedgfb cdfbe gcdfa fbcad dab cefabd cdfgeb eafb cagedb ab` and the `outputValuesString` is `cdfeb fcadb cdfeb cdbaf`.

The `findSegmentNamesMap` function returns the following mapping:

```js
{
  a: 'd',
  b: 'e',
  c: 'a',
  d: 'f',
  e: 'g',
  f: 'b',
  g: 'c',
}
```

The `getSevenSegmentDisplay` function maps each digit based on the default seven-segment display. So, the default seven-segment display

```js
[
  'abcefg', 'cf', 'acdeg', 'acdfg', 'bcdf',
  'abdfg', 'abdefg', 'acf', 'abcdefg', 'abcdfg'
]
```

becomes the following based on the segment names map

```js
[
  'abcdeg', 'ab', 'acdfg', 'abcdf', 'abef',
  'bcdef', 'bcdefg', 'abd', 'abcdefg', 'abcdef'
]
```

Notice the letters are sorted alphabetically. This makes identifying the output digits easier. The `getOutputDigits` function maps the `outputValuesString` to `5353`.

We can start by initializing the segment name map:

```js
const findSegmentNamesMap = (segmentPatternsString) => {
  const segmentMap = { a: '', b: '', c: '', d: '', e: '', f: '', g: '' }
  return segmentMap
}
```

Using the default segment names, we have the digits from `0` to `9` represented as:

```js
const sevenSegmentDisplay = [
  'abcefg', 'cf', 'acdeg', 'acdfg', 'bcdf',
  'abdfg', 'abdefg', 'acf', 'abcdefg', 'abcdfg'
]
```

and each letter occurs as follows:

```js
const abcdefgOccurence = { a: 8, b: 6, c: 8, d: 7, e: 4, f: 9, g: 7 }
```

Thus, the `b`, `e` and `f` segments occur a unique number of times. We can use this information to determine the segment name mapping for these values.

```js
const findSegmentNamesMap = (segmentPatternsString) => {
  const segmentMap = { a: '', b: '', c: '', d: '', e: '', f: '', g: '' }
  const letterOccurence = calculateLetterOccurence(segmentPatternsString)
  segmentMap.b = findLetter(letterOccurence, abcdefgOccurence.b)
  segmentMap.e = findLetter(letterOccurence, abcdefgOccurence.e)
  segmentMap.f = findLetter(letterOccurence, abcdefgOccurence.f)
  return segmentMap
}
```
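The `calculateLetterOccurence` and `findLetter` helpers are used above but not shown in the article; plausible minimal versions could look like this (these are my sketches, not the author's originals):

```js
// Count how often each letter appears across all ten patterns.
const calculateLetterOccurence = (segmentPatternsString) => {
  const occurence = {}
  segmentPatternsString
    .replaceAll(' ', '')
    .split('')
    .forEach((letter) => {
      occurence[letter] = (occurence[letter] || 0) + 1
    })
  return occurence
}

// Find the letter occurring exactly `count` times; only meaningful for the
// unique counts 4, 6 and 9 (segments e, b and f in the default display).
const findLetter = (letterOccurence, count) =>
  Object.keys(letterOccurence).find((letter) => letterOccurence[letter] === count)

const occurence = calculateLetterOccurence(
  'acedgfb cdfbe gcdfa fbcad dab cefabd cdfgeb eafb cagedb ab'
)
console.log(findLetter(occurence, 6)) // 'e' (maps to segment b, which occurs 6 times)
```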

Part one hinted at using the length of digits. We can deduce `a`, `c` and `d`.

To find the segment corresponding to `a`, we can identify numbers `1` and `7` in the `segmentPatternsString`. We are looking for the letter not present in `1`. For example, given `ab` and `abd`, we can determine that `d` is the name corresponding to segment `a`.

```js
const findA = (segmentPatternsString) => {
  const one = findDigitBySignalsLength(segmentPatternsString, 2)
  const seven = findDigitBySignalsLength(segmentPatternsString, 3)
  let a = seven
  one.split('').forEach((letter) => {
    a = a.replace(letter, '')
  })
  return a
}
```
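The `findDigitBySignalsLength` helper used above is not shown in the article; a minimal version (my sketch) picks the first pattern with the given length:

```js
// Return the first pattern whose segment count matches `length`.
const findDigitBySignalsLength = (segmentPatternsString, length) =>
  segmentPatternsString.split(' ').find((digit) => digit.length === length)

const patterns = 'acedgfb cdfbe gcdfa fbcad dab cefabd cdfgeb eafb cagedb ab'
console.log(findDigitBySignalsLength(patterns, 2)) // 'ab'  (digit 1)
console.log(findDigitBySignalsLength(patterns, 3)) // 'dab' (digit 7)
```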

To find the segment corresponding to `c`, we can identify the number `1` in the `segmentPatternsString` and eliminate the previously found segment corresponding to `f`. For example, given `ab` and the previously found `b` corresponding to `f`, we can deduce that `a` is the name corresponding to `c`.

```js
const findC = (segmentPatternsString, letterF) => {
  const one = segmentPatternsString
    .split(' ')
    .find((digit) => digit.length === 2)
  return one.replace(letterF, '')
}
```

To find the segment corresponding to `d`, we can identify the number `4` in the `segmentPatternsString` and eliminate the previously found segments corresponding to `b`, `c` and `f`. For example, given `eafb` and the previously found `e` corresponding to `b`, `a` corresponding to `c` and `b` corresponding to `f`, we can deduce that `f` is the name corresponding to `d`.

```js
const findD = (segmentPatternsString, signalMap) => {
  const four = findDigitBySignalsLength(segmentPatternsString, 4)
  return four
    .replace(signalMap.b, '')
    .replace(signalMap.c, '')
    .replace(signalMap.f, '')
}
```

After mapping `a`, `b`, `c`, `d`, `e` and `f`, we can determine the segment corresponding to `g` by process of elimination.

```js
const findG = (signalMap) => {
  let allLetters = 'abcdefg'
  Object.keys(signalMap).forEach(
    (key) => (allLetters = allLetters.replace(signalMap[key], ''))
  )
  return allLetters
}
```

The final solution for part two is in my GitHub repository.

Day 8 was really challenging and entailed decoding numbers by identifying unique occurrences, segment lengths and by process of elimination.

All solutions are in this GitHub repository.

The alignment position must be between the minimum and maximum horizontal submarine positions. We can then calculate the needed fuel for each alignment position between these thresholds.

We can break down the solution as follows:

- read positions from input
- find the minimum and maximum positions
- calculate the fuel corresponding to each alignment
- find the minimum fuel

The `readInput` function created on Day 1 is useful for getting the file `data` as a string. We can then map this data to a number array.

`const positions = data.split(',').map((n) => Number(n))`

Since solving Advent of Code challenges, I have noticed the `reduce` function is very handy for many calculations based on elements in an array. Finding the minimum and maximum number in an array is one of those cases. I also relied on `Math.min` and `Math.max` to avoid writing conditional statements, thus additional lines of code.

```js
const [minAlignPos, maxAlignPos] = positions.reduce(
  ([min, max], pos) => [Math.min(min, pos), Math.max(max, pos)],
  [positions[0], positions[0]]
)
```

We can iterate through each alignment between the maximum and minimum positions using a for loop.

```js
for (let alignPos = minAlignPos; alignPos <= maxAlignPos; alignPos++) {
  const fuel = calculateFuel(positions, alignPos)
}
```

We can use the reduce function again to calculate the fuel required for aligning all positions. We must sum the differences between each position and the alignment position.

```js
const calculateFuel = (positions, alignPos) => {
  return positions.reduce((fuel, pos) => fuel + Math.abs(pos - alignPos), 0)
}
```

After calculating the fuel for each alignment, we can determine the minimum fuel needed by comparing each fuel amount with the `minFuel` variable and updating it accordingly.

```js
let minFuel
for (let alignPos = minAlignPos; alignPos <= maxAlignPos; alignPos++) {
  const fuel = calculateFuel(positions, alignPos)
  // compare against undefined explicitly so a legitimate fuel of 0 is kept
  if (minFuel === undefined || fuel < minFuel) {
    minFuel = fuel
  }
}
return minFuel
```
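Putting the part-one pieces together (the `findMinFuel` wrapper name is mine), the sample positions from the puzzle can serve as a quick check:

```js
const calculateFuel = (positions, alignPos) =>
  positions.reduce((fuel, pos) => fuel + Math.abs(pos - alignPos), 0)

// Try every candidate alignment between the min and max positions.
const findMinFuel = (positions) => {
  const [minAlignPos, maxAlignPos] = positions.reduce(
    ([min, max], pos) => [Math.min(min, pos), Math.max(max, pos)],
    [positions[0], positions[0]]
  )
  let minFuel
  for (let alignPos = minAlignPos; alignPos <= maxAlignPos; alignPos++) {
    const fuel = calculateFuel(positions, alignPos)
    if (minFuel === undefined || fuel < minFuel) {
      minFuel = fuel
    }
  }
  return minFuel
}

// The puzzle's sample crabs align cheapest at position 2, costing 37 fuel.
console.log(findMinFuel([16, 1, 2, 0, 4, 2, 7, 1, 2, 14])) // 37
```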

The final solution for part one is in my GitHub repository.

In part two, the problem describes a different way of calculating the fuel. Instead of a constant amount of fuel per step, the required fuel increases by 1 unit with each step. Thus, aligning over `n` steps requires `1 + 2 + 3 + ... + n` units of fuel. We can use Gauss's summation and calculate the cost for `n` steps as `n * (n + 1) / 2`.

```js
const calculateFuelForSteps = (n) => (n * (n + 1)) / 2

const calculateFuel = (positions, alignPos) => {
  return positions.reduce(
    (fuel, pos) => fuel + calculateFuelForSteps(Math.abs(pos - alignPos)),
    0
  )
}
```
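As a quick check under the new cost function, aligning the puzzle's sample positions at position `5` costs `168` fuel:

```js
// Triangular-number cost: moving n steps costs 1 + 2 + ... + n.
const calculateFuelForSteps = (n) => (n * (n + 1)) / 2

const calculateFuel = (positions, alignPos) =>
  positions.reduce(
    (fuel, pos) => fuel + calculateFuelForSteps(Math.abs(pos - alignPos)),
    0
  )

console.log(calculateFuel([16, 1, 2, 0, 4, 2, 7, 1, 2, 14], 5)) // 168
```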

The final solution for part two is in my GitHub repository.

We found the most fuel-efficient way of aligning a set of crab submarines given their horizontal positions. I had fun reading and solving the challenge of Day 7 and am looking forward to Day 8!

All solutions are in this GitHub repository.

We have an initial set of fish with given internal timers until they multiply. According to the problem, after creating a new fish, each fish resets its internal timer to 6 days since they need a week to make a new fish. A new fish starts with an internal timer of 9 days until multiplying the first time. The challenge is to calculate the number of fish after 80 days.

We can solve the problem with the following steps:

- read the input
- update the state of each fish after a day
- add new fish
- repeat for 80 days and count fish

The input contains comma-separated numbers, which represent the state of each fish. We can use an array to store each fish as a number.

`let fish = data.split(',').map((n) => Number(n))`

Without considering new fish for now, after a day, each fish's internal time decreases by one until it reaches 0, at which point it resets to 6.

```js
const newFishPopulation = fish.map((f) => {
  if (f === 0) {
    return 6
  }
  return f - 1
})
```

We can count the amount of fish that reset to 6 and add that amount to the list of new fish population.

```js
const getNewFishPopulation = (fish) => {
  let newFishCount = 0
  const newFishPopulation = fish.map((f) => {
    if (f === 0) {
      newFishCount++
      return 6
    }
    return f - 1
  })
  newFishPopulation.push(...Array(newFishCount).fill(8))
  return newFishPopulation
}
```

The challenge is to find the amount of fish after 80 days, so we can write a for loop which would update the fish array every day.

```js
for (let day = 1; day <= 80; day++) {
  fish = getNewFishPopulation(fish)
}
return fish.length
```
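Running the naive simulation against the sample school `3,4,3,1,2` (shortened to 18 days here) reproduces the count from the puzzle description:

```js
const getNewFishPopulation = (fish) => {
  let newFishCount = 0
  const newFishPopulation = fish.map((f) => {
    if (f === 0) {
      newFishCount++
      return 6 // reset the timer after reproducing
    }
    return f - 1
  })
  // every fish that reset also spawned a new fish with timer 8
  newFishPopulation.push(...Array(newFishCount).fill(8))
  return newFishPopulation
}

let fish = [3, 4, 3, 1, 2]
for (let day = 1; day <= 18; day++) {
  fish = getNewFishPopulation(fish)
}
console.log(fish.length) // 26
```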

You can see the final solution for part one on my GitHub repository.

Part two is about optimizing the algorithm to calculate the fish population after 256 days. The solution from part one is not efficient in terms of memory: the fish array grows exponentially, and spreading millions of new fish as arguments to `push` eventually throws the following range error.

```
fish.push(...Array(newFishCount).fill(8))
          ^
RangeError: Maximum call stack size exceeded
```

While thinking about this problem, it occurred to me that fish with the same state produce the same number of descendants. For example, two fish left with three days until reproduction create the same number of descendants after 80 or any number of days. Thus, it would be enough to keep track of the number of fish in a given state each day. So I opted for an occurrence array with nine elements, where each index represents the number of days until reproduction and stores the number of fish in that state.

We can break down the problem like this:

- initialize occurrence array from input
- shift occurrence array one position
- add fish corresponding to state 6
- add new fish
- repeat for 256 days and count fish

```js
import { URL, fileURLToPath } from 'url'
import { readInput } from '../utils/readInput.js'

const sumFish = (occurenceArray) => {
  return occurenceArray.reduce((sum, fishCount) => sum + fishCount, 0)
}

const solve = (data) => {
  const fish = data.split(',').map((n) => Number(n))
  const fishOccurence = Array(9).fill(0)
  fish.forEach((f) => fishOccurence[f]++)
  for (let day = 1; day <= 256; day++) {
    const newFishCount = fishOccurence.shift()
    fishOccurence[6] += newFishCount
    fishOccurence.push(newFishCount)
  }
  return sumFish(fishOccurence)
}

const inputPath = fileURLToPath(new URL('./input.txt', import.meta.url))
const data = await readInput(inputPath)
console.log(solve(data))
```
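Extracting the simulation into a function parameterized by the day count (the `countFish` name is mine) makes the sample easy to verify:

```js
// Occurrence-array simulation: index = days until reproduction,
// value = number of fish in that state.
const countFish = (initialFish, days) => {
  const fishOccurence = Array(9).fill(0)
  initialFish.forEach((f) => fishOccurence[f]++)
  for (let day = 1; day <= days; day++) {
    const newFishCount = fishOccurence.shift() // fish reproducing today
    fishOccurence[6] += newFishCount // parents reset to 6
    fishOccurence.push(newFishCount) // newborns start at 8
  }
  return fishOccurence.reduce((sum, fishCount) => sum + fishCount, 0)
}

console.log(countFish([3, 4, 3, 1, 2], 80)) // 5934
console.log(countFish([3, 4, 3, 1, 2], 256)) // 26984457539
```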

Day 6 offered a challenge in terms of optimizing an algorithm. Although it stored duplicate elements, a simple solution worked well for part one. However, part two required using an occurrence array instead.

All solutions are in this GitHub repository.

The input contains a set of lines defined using the x, y coordinates of two points in the following format: `x1,y1 -> x2,y2`. The goal is to find how many points with integer coordinates overlap. The first part requires only checking points overlapping horizontal or vertical lines.

We can break down the problem as follows:

- read lines from the input file
- find vertical and horizontal lines
- store points in a hash map
- count overlapping points

We can reuse the `readInput` function written on Day 1. If you haven't already, give Advent of Code: Day 1 a read! After extracting the data as a string, we split it into lines using the newline separator. Then, we map each line to an array containing the coordinates as numbers. So the line `x1,y1 -> x2,y2` is mapped to `[x1, y1, x2, y2]`.

```js
const lines = data.split('\n').map((line) =>
  line
    .replace(' -> ', ',')
    .split(',')
    .map((n) => Number(n))
)
```

We can then iterate through the lines and destructure the coordinates.

```js
lines.forEach((line) => {
  const [x1, y1, x2, y2] = line
})
```

We must only consider points from vertical or horizontal lines in the first part of the problem. Vertical lines have the same x coordinates, whereas horizontal lines have the same y coordinates.

```js
const isVertical = x1 === x2
const isHorizontal = y1 === y2
```

We can find the points on a vertical line using either the x1 or x2 coordinate and all numbers between y1 and y2. Since we don't know which one is higher, we can calculate the minimum and maximum using `Math.min(y1, y2)` and `Math.max(y1, y2)` respectively. Then, using a for loop, we can determine all integer values from the minimum to the maximum value.

```js
if (isVertical) {
  const minY = Math.min(y1, y2)
  const maxY = Math.max(y1, y2)
  for (let y = minY; y <= maxY; y++) {
    addPointToHashMap(pointHashMap, x1, y)
  }
}
```

Using the same approach, we can determine the points on horizontal lines.

```js
if (isHorizontal) {
  const minX = Math.min(x1, x2)
  const maxX = Math.max(x1, x2)
  for (let x = minX; x <= maxX; x++) {
    addPointToHashMap(pointHashMap, x, y1)
  }
}
```

Since the problem does not define a maximum coordinate value, storing the points in a two-dimensional matrix is not the most efficient approach memory-wise. Instead, I opted for a hash map where keys are strings in `x,y` format and values are the number of overlapping points.

We initialize the point hash map with an empty object.

`const pointHashMap = {}`

Then, we create new keys and initialize them with 1 when we find a new point, or increase the value of that point when the point already exists in the map.

```js
const addPointToHashMap = (hashMap, x, y) => {
  const key = `${x},${y}`
  if (hashMap[key]) {
    hashMap[key]++
  } else {
    hashMap[key] = 1
  }
}
```

Finally, we need to count the overlapping points, in other words, points with a value of at least `2`.

The `Object.keys(hashMap)` function gives an iterable array of the points. We can then use the `Array.reduce` function to calculate the total number of overlapping points.

```js
const countOverlappingPoints = (hashMap) => {
  const points = Object.keys(hashMap)
  return points.reduce((total, point) => {
    if (hashMap[point] > 1) {
      return total + 1
    }
    return total
  }, 0)
}
```
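A tiny usage example of the two helpers together; two lines crossing at a single point yield exactly one overlap:

```js
const addPointToHashMap = (hashMap, x, y) => {
  const key = `${x},${y}`
  if (hashMap[key]) {
    hashMap[key]++
  } else {
    hashMap[key] = 1
  }
}

const countOverlappingPoints = (hashMap) => {
  const points = Object.keys(hashMap)
  return points.reduce((total, point) => {
    if (hashMap[point] > 1) {
      return total + 1
    }
    return total
  }, 0)
}

// Point 0,9 is covered twice, point 1,9 only once.
const pointHashMap = {}
addPointToHashMap(pointHashMap, 0, 9)
addPointToHashMap(pointHashMap, 0, 9)
addPointToHashMap(pointHashMap, 1, 9)
console.log(countOverlappingPoints(pointHashMap)) // 1
```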

We call the function with the previously defined `pointHashMap`.

`countOverlappingPoints(pointHashMap)`

You can see the final solution for part one on my GitHub repository.

In part two, we must consider all lines. Fortunately, the problem gives an input constraint where all lines are horizontal, vertical or diagonal at precisely 45 degrees.

We can reuse most of the solution from part one, with the following additions:

- find diagonal lines
- find the points on diagonal lines

Given the input constraint, all lines that are neither horizontal nor vertical must be diagonal.

`const isDiagonal = !isHorizontal && !isVertical`

According to the problem,

> An entry like `1,1 -> 3,3` covers points `1,1`, `2,2`, and `3,3`.
> An entry like `9,7 -> 7,9` covers points `9,7`, `8,8`, and `7,9`.

So if we get the ranges of numbers between each coordinate pair, `x1 -> x2` and `y1 -> y2`, we can determine the points on the diagonals.

```js
if (isDiagonal) {
  const xRange = range(x1, x2)
  const yRange = range(y1, y2)
  xRange.forEach((x, index) => {
    const y = yRange[index]
    addPointToHashMap(pointHashMap, x, y)
  })
}
```

Given two x or y coordinates (e.g. `x1 = 9`, `x2 = 7`), we can generate an array of steps (e.g. `0, 1, 2`) corresponding to each number in the range. Then, depending on whether the coordinates increase or decrease, we can build the range as follows. For example, the coordinates `x1 = 9` and `x2 = 7` decrease, thus we decrease the number with each step (`c1 - p`).

```js
const range = (c1, c2) => {
  const pointsCount = Math.abs(c1 - c2) + 1
  return [...Array(pointsCount).keys()].map((p) => {
    if (c1 < c2) {
      return c1 + p
    }
    return c1 - p
  })
}
```
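A couple of quick checks of the `range` helper in both directions:

```js
// Build an inclusive range between two coordinates, in either direction.
const range = (c1, c2) => {
  const pointsCount = Math.abs(c1 - c2) + 1
  return [...Array(pointsCount).keys()].map((p) => {
    if (c1 < c2) {
      return c1 + p
    }
    return c1 - p
  })
}

console.log(range(9, 7)) // [9, 8, 7]
console.log(range(1, 3)) // [1, 2, 3]
```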

You can see the final solution for part two on my GitHub repository.

Day 5 was about finding overlapping points from a set of given lines. A hash map helped store points and determine how many lines overlap. JavaScript functions like `Math.abs`, `Math.min`, `Math.max` as well as `Object.keys` and `Array.reduce` were quite handy.

All solutions are in this GitHub repository.

We need to find the first winning bingo board and calculate the board's score. A winning bingo board contains an entire row or column of extracted random numbers.

The strategy of breaking down the problem into smaller, more manageable chunks worked well for day 3, so let's explore the steps for solving the fourth challenge:

- read the input file
- mark bingo numbers on each board
- check if a row is complete
- check if a column is complete
- calculate the score

The input contains the extracted random numbers on the first line, followed by 5 x 5 boards of numbers separated by empty lines. We can extract the bingo numbers and boards as strings using the double newline separator.

`const [bingoNumbersAsString, ...boardsAsString] = data.split('\n\n')`

We can extract the array of bingo numbers by using `,` as a separator and parsing each number.

```js
const bingoNumbers = bingoNumbersAsString
  .split(',')
  .map((numberAsString) => Number(numberAsString))
```

Next, we must store board numbers in a data structure. I opted to keep each board as an array of 25 objects containing the number and a `marked` boolean.

```js
{
  number: 4,
  marked: false,
}
```

The `marked` property is set to `false` by default and `true` whenever the number matches the played bingo number. A matrix with objects of the same structure would have worked well too. However, I thought it would be easier to traverse an array to find the matching bingo number rather than a matrix.

Here is the mapping of the input to the desired data structure.

```js
const boards = boardsAsString.map((boardAsString) =>
  boardAsString
    .replaceAll('\n', ' ')
    .split(' ')
    .filter(Boolean)
    .map((numberAsString) => ({
      number: Number(numberAsString),
      marked: false,
    }))
)
```

All newlines are replaced with a space using `.replaceAll('\n', ' ')`, then `.split(' ')` creates an array using the space character as a separator. Since some of the numbers are padded with additional spaces, some of the items in the array are empty strings. The `.filter(Boolean)` call removes all the empty strings from the array. Finally, each number is parsed and mapped into the data structure.

We can use the `forEach` function to iterate through bingo numbers and boards. However, `forEach` goes through all elements, whereas we should stop whenever we find a winning board. The `some` function stops iterating as soon as its callback returns `true`, that is, when we find a winning board.

```js
bingoNumbers.some((bingoNumber) => {
  return boards.some((board) => {
    // mark the bingo number on the board
    // determine if it is a winning board
    return isWinner
  })
})
```

To find and mark the bingo number on a board, we can use the `find` function and set the `marked` property to `true` when the board contains the number.

```js
bingoNumbers.some((bingoNumber) => {
  return boards.some((board) => {
    const bingoNumberOnBoard = board.find((item) => item.number === bingoNumber)
    if (!bingoNumberOnBoard) {
      return
    }
    bingoNumberOnBoard.marked = true
    // determine if it is a winning board
    return isWinner
  })
})
```

To determine whether the board is winning, we must check if the row or column of the marked number is now complete. Both checks require the index of the `bingoNumberOnBoard`.

```js
const bingoNumberIndexOnBoard = board.findIndex(
  (item) => item === bingoNumberOnBoard
)
```

Given that each board is 5 x 5, we can use the index of the bingo number in the array to determine indices of the elements of the row. Each board has the following indices:

```
 0  1  2  3  4
 5  6  7  8  9
10 11 12 13 14
15 16 17 18 19
20 21 22 23 24
```

We can determine the indices of the row of the bingo number with the formula `rowIndex * 5 + columnIndex`. The `rowIndex` is the quotient of the bingo number index divided by `5` and the column indices are always `0`, `1`, `2`, `3`, `4`. For example, for the index `13`, the row index is `Math.floor(13 / 5)`, which is `2`, thus the indices are `2 * 5 + 0` (`10`), `2 * 5 + 1` (`11`), `2 * 5 + 2` (`12`), `2 * 5 + 3` (`13`) and `2 * 5 + 4` (`14`).

Then, we only need to count how many marked elements there are on the row and return `true` if there are `5` marked elements and `false` otherwise.

```js
const checkRow = (board, bingoNumberIndexOnBoard) => {
  const rowIndex = Math.floor(bingoNumberIndexOnBoard / 5)
  const reducer = (markedTotal, columnIndex) => {
    const boardIndex = rowIndex * 5 + columnIndex
    if (board[boardIndex].marked) {
      return markedTotal + 1
    }
    return markedTotal
  }
  const markedTotal = [0, 1, 2, 3, 4].reduce(reducer, 0)
  return markedTotal === 5
}
```

In the `solve` function we can call the `checkRow` function like this:

`const isRow = checkRow(board, bingoNumberIndexOnBoard)`

The column check is calculated in a similar manner to the row, except we determine the indices slightly differently. The row indices are always `0`, `1`, `2`, `3`, `4` and the column index is the remainder of the bingo number index divided by `5`.

```js
const checkColumn = (board, bingoNumberIndexOnBoard) => {
  const columnIndex = bingoNumberIndexOnBoard % 5
  const reducer = (markedTotal, rowIndex) => {
    const boardIndex = rowIndex * 5 + columnIndex
    if (board[boardIndex].marked) {
      return markedTotal + 1
    }
    return markedTotal
  }
  const markedTotal = [0, 1, 2, 3, 4].reduce(reducer, 0)
  return markedTotal === 5
}
```

In the `solve` function we can call the `checkColumn` function like this:

`const isColumn = checkColumn(board, bingoNumberIndexOnBoard)`

The score is calculated by summing all the unmarked numbers and multiplying the sum by the bingo number. We can use the `reduce` function as follows:

```js
const calculateScore = (board, bingoNumber) => {
  const boardScore = board.reduce((boardScore, item) => {
    if (item.marked) {
      return boardScore
    }
    return boardScore + item.number
  }, 0)
  return boardScore * bingoNumber
}
```
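As a small illustration (using a toy 4-item board rather than a real 25-item one):

```js
const calculateScore = (board, bingoNumber) => {
  const boardScore = board.reduce((boardScore, item) => {
    if (item.marked) {
      return boardScore
    }
    return boardScore + item.number
  }, 0)
  return boardScore * bingoNumber
}

// Unmarked numbers sum to 5 + 7 = 12, multiplied by the bingo number 10.
const board = [
  { number: 4, marked: true },
  { number: 5, marked: false },
  { number: 7, marked: false },
  { number: 9, marked: true },
]
console.log(calculateScore(board, 10)) // 120
```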

In the `solve` function we can call the `calculateScore` function if we found a winning board:

```js
const isWinner = isRow || isColumn
if (isWinner) {
  score = calculateScore(board, bingoNumber)
}
```

You can see the final solution for part one on my GitHub repository.

In part two, we need to find the last winning bingo board. We can reuse most of the code defined in part one, such as reading the input, checking whether a row or column is complete and calculating the score.

Here is how we could break down the differences:

- iterate through all bingo numbers
- mark and skip winning boards

The main difference from part one is that instead of stopping after finding the first winning board, we must go through all bingo numbers until all boards have at least one complete row or column. Thus, `forEach` works well in this case.

```js
bingoNumbers.forEach((bingoNumber) => {
  boards.forEach((board) => {
    // ...
  })
})
```

We must keep track of winning boards to prevent further markings and repeated wins. We can add a `won` property to each board.

```js
boards = boards.map((board) => ({ won: false, board }))
```

We must refactor the code to account for this data structure change. In addition, we skip marking numbers on winning boards and set the `won` property to `true` whenever we find a new winning board.
whenever we find a new winning board.

```js
let lastWinningScore = 0
bingoNumbers.forEach((bingoNumber) => {
  boards.forEach((boardObject) => {
    const { board, won } = boardObject
    if (won) {
      return
    }
    // ...
    if (isWinner) {
      boardObject.won = true
      lastWinningScore = calculateScore(board, bingoNumber)
    }
  })
})
```

You can see the final solution for part two on my GitHub repository.

We wrote some code that simulates playing bingo. We determined the first and last winning boards, and we got to use a variety of JavaScript array functions, such as `some`, `forEach`, `reduce`, `map`, etc. As usual, I had fun with this challenge, and I am looking forward to day 5.

All solutions are in this GitHub repository.

The input file consists of binary numbers on separate lines. We must find the so-called gamma rate and epsilon rate, multiply them and return the resulting number in decimal format. Gamma rate is a binary number where each bit represents the most common bit in its position among all given numbers. Similarly, the epsilon rate contains the least common bit for each position.

We can break this problem into smaller tasks:

- read lines from the input file
- find how many bits each number has
- find the most common bit for a given position
- find the least common bit for a given position
- concatenate found bits
- parse the binary string to decimal

I wrote about the `readInput` function in my Advent of Code: Day 1 blog post. If you haven't already, give it a read!

```js
import { URL, fileURLToPath } from 'url'
import { readInput } from '../utils/readInput.js'

const inputPath = fileURLToPath(new URL('./input.txt', import.meta.url))
const data = await readInput(inputPath)
```

We can then get each line containing a binary number by splitting the input data using the newline character `\n`:

`const lines = data.split('\n')`

One of the constraints of this problem is that each number is represented using the same number of bits. Thus, we can calculate the length of the first line to find how many bits each number has.

`const numberOfBits = lines[0].length`

Now, for each position, we must find the most and least common bits:

```js
for (let i = 0; i < numberOfBits; i++) {
  // find most common bit for position i
  // find least common bit for position i
}
```

We must iterate through each line and investigate the bit at the given position. To get the bit at a given position of a line we can use `line.charAt(index)`. We can then count the occurrences of `0` and `1` and compare them in the end to determine the most common bit.

```js
const getMostCommonBit = (lines, index) => {
  let occurenceOf0 = 0
  let occurenceOf1 = 0
  lines.forEach((line) => {
    const bit = line.charAt(index)
    if (bit === '0') {
      occurenceOf0 += 1
    } else {
      occurenceOf1 += 1
    }
  })
  return occurenceOf0 > occurenceOf1 ? '0' : '1'
}
```

Since `occurenceOf0 > occurenceOf1` can be written as `occurenceOf0 - occurenceOf1 > 0`, we don't actually need the exact count of occurrences, only the difference. Thus, we can refactor the code like so:

```js
const getMostCommonBit = (lines, index) => {
  let occurenceDifference = 0
  lines.forEach((line) => {
    const bit = line.charAt(index)
    if (bit === '0') {
      occurenceDifference += 1
    } else {
      occurenceDifference -= 1
    }
  })
  return occurenceDifference > 0 ? '0' : '1'
}
```

After finding the most common bit, we can determine the least common bit as its opposite. For example, if the most common bit is `0` we know the least common is `1` and vice versa.

```js
const mostCommonBit = getMostCommonBit(lines, i)
const leastCommonBit = mostCommonBit === '0' ? '1' : '0'
```

```js
let gammaRate = ''
let epsilonRate = ''

for (let i = 0; i < numberOfBits; i++) {
  const mostCommonBit = getMostCommonBit(lines, i)
  gammaRate += mostCommonBit
  epsilonRate += mostCommonBit === '0' ? '1' : '0'
}
```

After finding the `gammaRate` and `epsilonRate`, we can parse them to decimal using the `parseInt` function.

```js
parseInt(gammaRate, 2)
parseInt(epsilonRate, 2)
```
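Assembled into a single function, part one might look like the following sketch (`solve` is a name I'm assuming here; the sample input comes from the puzzle description):

```js
// Part one sketch: derive gamma/epsilon from the most common bit per position
const getMostCommonBit = (lines, index) => {
  let occurenceDifference = 0
  lines.forEach((line) => {
    if (line.charAt(index) === '0') occurenceDifference += 1
    else occurenceDifference -= 1
  })
  return occurenceDifference > 0 ? '0' : '1'
}

const solve = (lines) => {
  const numberOfBits = lines[0].length
  let gammaRate = ''
  let epsilonRate = ''
  for (let i = 0; i < numberOfBits; i++) {
    const mostCommonBit = getMostCommonBit(lines, i)
    gammaRate += mostCommonBit
    // the least common bit is always the opposite of the most common one
    epsilonRate += mostCommonBit === '0' ? '1' : '0'
  }
  return parseInt(gammaRate, 2) * parseInt(epsilonRate, 2)
}
```

With the example numbers from the puzzle, `solve` yields gamma `22`, epsilon `9` and thus the product `198`.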

You can see the final solution for part one on my GitHub repository.

I suggest thoroughly reading and understanding the problem before reading further. To solve the second part of the challenge, we need to repeatedly filter the given binary numbers based on a bit criteria until a single number is left. We must find the so-called oxygen generator rating (`oxygenRate`) and the CO2 scrubber rating (`co2Rate`) using this filtering method.

There are a few steps to solve the challenge:

- find the bit criteria for `oxygenRate` and `co2Rate`
- filter numbers until there is a single item left
- parse the binary string to decimal

The bit criteria for `oxygenRate` is the most common bit in position X among the filtered numbers, where X starts at 0 and increases each filtering round. Similarly, the bit criteria for `co2Rate` is the least common bit.

We can reuse the `getMostCommonBit` function created for part one, but with changes to accommodate the additional rule: if both bits are equally common, the criteria should default to `1` for `oxygenRate` and `0` for `co2Rate`.

```js
const getOccurenceDifference = (lines, index) => {
  let occurenceDifference = 0
  lines.forEach((line) => {
    const bit = line.charAt(index)
    if (bit === '0') {
      occurenceDifference += 1
    } else {
      occurenceDifference -= 1
    }
  })
  return occurenceDifference
}
```

The `getOccurenceDifference` function returns a positive number if `0` is most common, a negative number if `1` is most common, or `0` if both bits are equally common. Thus, the bit criteria for `oxygenRate` is defined as follows and represents the most common bit, defaulting to `1` if both bits are equally common.

`getOccurenceDifference(oxygenRates, index) > 0 ? '0' : '1'`

The bit criteria for `co2Rate` represents the least common bit, defaulting to `0` if both bits are equally common.

`getOccurenceDifference(co2Rates, index) > 0 ? '1' : '0'`

The first filtering round starts with all lines for both `oxygenRates` and `co2Rates`. We can copy the lines representing the binary numbers into each corresponding array.

```js
let oxygenRates = [...lines]
let co2Rates = [...lines]
```

We repeatedly filter these numbers until there is a single item left. Each time we must increase the index of the bit we compare with `bitCriteria`.

```js
let index = 0
while (oxygenRates.length > 1) {
  const bitCriteria = getOccurenceDifference(oxygenRates, index) > 0 ? '0' : '1'
  oxygenRates = oxygenRates.filter((line) => line.charAt(index) === bitCriteria)
  index++
}
```

Similarly, we filter the `co2Rates`.

```js
index = 0
while (co2Rates.length > 1) {
  const bitCriteria = getOccurenceDifference(co2Rates, index) > 0 ? '1' : '0'
  co2Rates = co2Rates.filter((line) => line.charAt(index) === bitCriteria)
  index++
}
```

Finally, we can parse the first and only element in `oxygenRates` and `co2Rates` in the same manner we did for part one.

```js
parseInt(oxygenRates[0], 2)
parseInt(co2Rates[0], 2)
```
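Since both ratings follow the same filter-until-one-remains shape, the two loops can be folded into one helper. The sketch below restructures the code from this part (`filterByBitCriteria`, `oxygenRate` and `co2Rate` are assumed names):

```js
// Count how many more 0s than 1s appear at the given position
const getOccurenceDifference = (lines, index) => {
  let occurenceDifference = 0
  lines.forEach((line) => {
    if (line.charAt(index) === '0') occurenceDifference += 1
    else occurenceDifference -= 1
  })
  return occurenceDifference
}

// Repeatedly keep only the candidates matching the bit criteria,
// advancing one bit position per round, until one number remains
const filterByBitCriteria = (lines, pickBit) => {
  let candidates = [...lines]
  let index = 0
  while (candidates.length > 1) {
    const bitCriteria = pickBit(getOccurenceDifference(candidates, index))
    candidates = candidates.filter((line) => line.charAt(index) === bitCriteria)
    index++
  }
  return parseInt(candidates[0], 2)
}

// Most common bit, ties default to '1'
const oxygenRate = (lines) =>
  filterByBitCriteria(lines, (diff) => (diff > 0 ? '0' : '1'))
// Least common bit, ties default to '0'
const co2Rate = (lines) =>
  filterByBitCriteria(lines, (diff) => (diff > 0 ? '1' : '0'))
```

On the puzzle's example input, `oxygenRate` returns `23` (binary `10111`) and `co2Rate` returns `10` (binary `01010`), giving the expected product `230`.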

You can see the final solution for part two on my GitHub repository.

Day 3 was about thoroughly reading the problem, understanding it and breaking it down into smaller parts. We processed strings representing binary numbers and relied on JavaScript functions like `parseInt(binaryString, 2)`, `String.charAt(index)` and `array.filter(condition)` to solve the challenges.

All solutions are in this GitHub repository.

The goal is to find the horizontal position and depth of a moving submarine based on a set of given instructions.

According to the problem, the initial depth and horizontal position are `0`.

```js
let depth = 0
let horizontalPosition = 0
```

We can read the set of instructions using the `readInput` utility created on Day 1. Then we can split the data line by line.

`const commands = data.split('\n')`

Each line contains a command type and a number of steps. To use these, we can split the line by the space character, then convert the steps to a number.

```js
const [type, stepsAsString] = command.split(' ')
const steps = Number(stepsAsString)
```

Finally, we can calculate the depth and horizontal positions based on the command type and steps to move according to the rules described in the challenge.

```js
switch (type) {
  case 'forward':
    horizontalPosition += steps
    break
  case 'down':
    depth += steps
    break
  case 'up':
    depth -= steps
    break
  default:
    break
}
```

The final solution:

```js
import { URL, fileURLToPath } from 'url'
import { readInput } from '../utils/readInput.js'

const solve = (data) => {
  const commands = data.split('\n')
  let depth = 0
  let horizontalPosition = 0
  commands.forEach((command) => {
    const [type, stepsAsString] = command.split(' ')
    const steps = Number(stepsAsString)
    switch (type) {
      case 'forward':
        horizontalPosition += steps
        break
      case 'down':
        depth += steps
        break
      case 'up':
        depth -= steps
        break
      default:
        break
    }
  })
  return depth * horizontalPosition
}

const inputPath = fileURLToPath(new URL('./input.txt', import.meta.url))
const data = await readInput(inputPath)
console.log(solve(data))
```

The second part of the problem requires calculating depth and horizontal position in a slightly different way using new rules and an aim value.

`let aim = 0`

The new rules can be written in JavaScript as follows:

```js
switch (type) {
  case 'forward':
    horizontalPosition += steps
    depth += aim * steps
    break
  case 'down':
    aim += steps
    break
  case 'up':
    aim -= steps
    break
  default:
    break
}
```
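Assembled into a function like in part one, part two might look like this sketch (`solvePartTwo` is an assumed name; the example commands below are the ones from the puzzle description):

```js
// Part two sketch: track aim, and let 'forward' move both horizontally
// and vertically by aim * steps
const solvePartTwo = (data) => {
  const commands = data.split('\n')
  let depth = 0
  let horizontalPosition = 0
  let aim = 0
  commands.forEach((command) => {
    const [type, stepsAsString] = command.split(' ')
    const steps = Number(stepsAsString)
    switch (type) {
      case 'forward':
        horizontalPosition += steps
        depth += aim * steps
        break
      case 'down':
        aim += steps
        break
      case 'up':
        aim -= steps
        break
      default:
        break
    }
  })
  return depth * horizontalPosition
}
```

For the puzzle's example commands, this ends with horizontal position `15` and depth `60`, returning `900`.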

The final solution for part two is on my GitHub repository.

Day 2 was about calculating values such as horizontal position and depth of a moving submarine based on a set of given commands. I had fun solving the challenge and the utility functions added on Day 1 helped solve the second challenge somewhat faster.

This blog series will walk you through my thought process and show possible solutions to the challenges. I used JavaScript to solve these, but you can use any programming language.

All solutions are in this GitHub repository.

To use JavaScript, we need to set up a Node.js project. I opted for Yarn as a package manager due to improved performance when installing dependencies and personal preference, but `npm` works well too.

Steps:

- Initialize the project by running `yarn init`. This command creates the `package.json` file containing the project name, description, licensing and dependencies.
- Add `"type": "module"` to the `package.json` file to enable ES modules, since Node.js uses CommonJS modules by default.
- Install Prettier to format code automatically. Run `yarn add prettier` and add the following configuration in `.prettierrc`:

```
trailingComma: "es5"
tabWidth: 2
semi: false
singleQuote: true
endOfLine: "lf"
```

- Add a `.gitignore` file for the Node.js project. This template from GitHub works well.

The solution and input files are grouped by day in the `src` folder (e.g. `src/day-01-sonar-sweep`).

```
advent-of-code-2021
  .gitignore
  .prettierrc
  package.json
  yarn.lock
  node_modules
  src
    day-01-sonar-sweep
      input.txt
      one.js
      two.js
```

Reading input data from a file is a common need when solving any of the challenges. Thus, I created a utility function that accepts a file path as a parameter and returns a `Promise` with the data as a string.

```js
// src/utils/readInput.js
import fs from 'fs'

export const readInput = (inputPath) => {
  return new Promise((resolve, reject) =>
    fs.readFile(inputPath, 'utf8', (err, data) => {
      if (err) {
        console.error(err)
        reject(err)
        return
      }
      resolve(data)
    })
  )
}
```
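As a side note, Node.js also ships a promise-based file system API, so a similar utility can be written without wrapping the callback manually. A sketch using the built-in `fs/promises` module:

```js
// src/utils/readInput.js (promise-based alternative)
import { readFile } from 'fs/promises'

// readFile resolves to a string when an encoding is specified
export const readInput = (inputPath) => readFile(inputPath, 'utf8')
```

The callback version above works just as well; this variant simply leans on the standard library for the promise plumbing.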

We can then import and use the `readInput` utility in each solution file (e.g. `one.js`):

```js
import { readInput } from '../utils/readInput.js'

const data = await readInput(inputPath)
```

The input path is relative to the solution file. We can use the `url` module from Node.js to build the path needed for the file reader.

```js
import { URL, fileURLToPath } from 'url'

const inputPath = fileURLToPath(new URL('./input.txt', import.meta.url))
```

You can read the problem on Advent of Code Day 1. Given an array of depths, the objective is to compare every two adjacent depths and count when the depth increases.

To solve this problem, we first need to parse the contents of the input file. Since each number is given on a new line, we can use the `split` function with the new line character `\n` as a separator to get an array of strings representing the contents of each line. Then we convert each line to a number.

`const depths = data.split('\n').map((line) => Number(line))`

Since the first depth does not increase or decrease, we can remove it from the array and store it with the purpose of comparing it to the next depth.

`let previousDepth = depths.shift()`

The depth increases when `depth - previousDepth > 0` is true. We can iterate the array and count how many times the depth increases using `forEach` and a counter variable.

```js
let increaseCount = 0
depths.forEach((depth) => {
  if (depth - previousDepth > 0) {
    increaseCount += 1
  }
  previousDepth = depth
})
```

Put together, the solution is:

```js
import { URL, fileURLToPath } from 'url'
import { readInput } from '../utils/readInput.js'

const getIncreaseCount = (data) => {
  const depths = data.split('\n').map((i) => Number(i))
  let previousDepth = depths.shift()
  let increaseCount = 0
  depths.forEach((depth) => {
    if (depth - previousDepth > 0) {
      increaseCount += 1
    }
    previousDepth = depth
  })
  return increaseCount
}

const inputPath = fileURLToPath(new URL('./input.txt', import.meta.url))
const data = await readInput(inputPath)
console.log(getIncreaseCount(data))
```

The second part has the objective of comparing sums of three depths to the next set of three depths. For example, given the depths a, b, c, d, e, we would have to compare:

- a + b + c and b + c + d
- b + c + d and c + d + e

Since we subtract the sums to find whether the depth increases or decreases, the problem can be reduced to comparing

- a and d
- b and e

In other words, we need to compare every two depths positioned three indexes apart.

```js
let increaseCount = 0
for (let i = 0; i < depths.length - 3; i++) {
  if (depths[i + 3] - depths[i] > 0) {
    increaseCount += 1
  }
}
```
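Wrapped in a function (with the assumed name `countWindowIncreases`), the comparison can be checked against the example depths from the puzzle:

```js
// Comparing the sums of windows (i..i+2) and (i+1..i+3) reduces to
// comparing depths[i] and depths[i + 3], since the middle terms cancel out
const countWindowIncreases = (depths) => {
  let increaseCount = 0
  for (let i = 0; i < depths.length - 3; i++) {
    if (depths[i + 3] - depths[i] > 0) {
      increaseCount += 1
    }
  }
  return increaseCount
}
```

For the example depths `199, 200, 208, 210, 200, 207, 240, 269, 260, 263` this returns `5`, matching the expected answer for part two.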

You can see the final solution on my GitHub repository.

Day 1 consisted of setting up a Node.js project with Yarn and Prettier, creating an input reader utility, parsing numbers from an input file, iterating through a number array and comparing every two numbers positioned either one or three indexes apart.

This short post shows you how you can create an empty commit.

I needed to create an empty commit when using GitVersion. Let me explain. GitVersion finds commit messages containing patterns like `+semver:minor` and determines the corresponding build and release number used in the continuous deployment process. While the best practice is adding this text on the commit with the minor or major change, there are cases where the change is broken up into multiple smaller commits. I have personally experienced such a scenario, or simply forgot to add the message that bumps the version.

So I tried running this:

`git commit -m "+semver:minor"`

Git, however, did not let me commit, proudly declaring my branch was up to date and there was nothing to commit.

The `--allow-empty` option

Git assumes that creating an empty commit might be a mistake, thus preventing it by default. You can bypass this using the `--allow-empty` option.

`git commit -m "+semver:minor" --allow-empty`

With this approach, you can create an empty commit containing no files or any changes, only with your desired message.

The `--allow-empty-message` option

When researching to write this post, I stumbled upon the `--allow-empty-message` option. Similarly to `--allow-empty`, it is not meant for the typical workflow. Omitting a descriptive short message about what has changed is not good practice and makes it harder to scan through the changes when looking at the commit history.

However, an empty commit with no message might be useful for triggering a Continuous Deployment & Integration (CD/CI) pipeline.

`git commit --allow-empty-message --no-edit`

This snippet bypasses Git's requirement of adding a message. The `--no-edit` option keeps the selected empty message and prevents launching an editor, which by default prompts for modifying empty commit messages.

In rare cases, you might need to bypass Git's default requirements when making a commit. The `--allow-empty` and `--allow-empty-message` options can be useful to create empty commits that trigger some effect. The examples mentioned in this post were triggering a CI/CD pipeline and bumping the semantic version of your package or application.

Data Science skills are among the most sought after in the modern tech industry. Successful businesses collect and analyse large amounts of data to effectively make decisions, innovate and create an excellent service for their customers.

This article is the first in the **Road to Data Science** series and fittingly introduces the benefits of learning Data Science from a Software Engineer's perspective. While the points I make in this article reflect my expectations and reasons for learning, I hope it will motivate you to explore this marvellous field.

At the beginning of my learning journey, I had no clear idea of what Data Science was. Fortunately, Denise Sengl's brief introduction clarified some of my initial assumptions, captivated my interest and sparked my curiosity about the intricate world of Data Science.

I encourage you to watch the first 20 minutes of What the Tech... is a Data Scientist?. In this video, Denise reveals how "Data Science is about finding patterns in data for a specific purpose".

Rather than seeing Data Science as a completely different discipline, think of it as part of the problem-solving process. This new perspective of Data Science means that learning data analysis techniques and shifting your mindset to think like a Data Scientist becomes a valuable tool for solving problems.

I firmly believe that learning, seeking to improve and constantly building new skills are healthy habits of a successful Software Engineer. Having Data Science skills can broaden your perspective and improve decision-making.

Data Scientists who are also Software Engineers are somewhat rare and have a combination of IT, statistics, and communication skills. Having a broad set of skills can advance your career and give you many opportunities in the future.

The diagram below is taken from Denise Sengl's presentation and illustrates the variety of job titles related to Data Scientist based on the skillset required on the job. Artificial Intelligence Engineer and Machine Learning Engineer use IT skills and statistics. Having IT skills, communication and domain expertise in a business are suitable for Data Engineer and Data Warehouse Architect positions. Finally, Data Analysts and Business Intelligence Analysts know statistics, have domain expertise and strong communication skills.

Investing time in learning about Artificial Intelligence (AI) and Machine Learning (ML) techniques can empower you to build the innovative, smart products of the future.

Software products are expected to become increasingly smarter by anticipating users' needs and making predictions. For example, mainstream content streaming applications like Netflix, Youtube and Spotify collect and analyse users' data to recommend personalised content, thus creating a better user experience.

By learning how to integrate features such as image processing, natural language processing, computer vision, and many more into modern products, you can create powerful applications. Examples range across industries from self-driving cars, legal analysis, fraud prevention to assistant software that can help doctors perform diagnosis faster.

Many of these applications are empowered by cloud computing services like Google Cloud, AWS and Microsoft Azure. Thus, by learning the basics of AI and ML, you can get started with some of these services that simplify the integration of AI-based features and build the intelligent software products of tomorrow.

The ability to engineer and analyse large amounts of ever-increasing quantities of data is essential in today's organisations. The Big Data field emerged when NASA scientists had to solve the challenge of collecting, processing and storing an unimaginably large amount of data from space missions, Earth observatory, etc.

While not everyone works at NASA, having the skills to manage large datasets, make data visualisations and find patterns helps organisations gain insight and make informed decisions to grow their business.

Learning Data Science

- helps you solve business problems by using insights from data
- gives you a wider range of career opportunities
- empowers you to build AI-based software
- enables you to work with Big Data

If you have a single project, you don't need a separate library for your components as you can define them directly in your project.

This post shows how to set up and get started using

- Create React App for installing React and Testing Library
- Storybook for documenting and showcasing components in isolation
- Rollup for bundling the library

You can find the resulting project on GitHub.

To get started, we can create a new project called `ui` using Create React App:

`npx create-react-app ui --template typescript`

This generates boilerplate code that allows us to create React components and test them using React Testing Library.

You can remove the `start`, `build` and `eject` scripts from `package.json` because we will use Storybook to start our project and Rollup to bundle a production-ready package.

```json
"scripts": {
  "test": "react-scripts test"
},
```

You can also remove `App.tsx`, `App.css`, `App.test.tsx`, `index.css`, `index.tsx`, `logo.svg` and `reportWebVitals.ts` since we won't use them.

Storybook is a wonderful tool for developing, showcasing and documenting UI components in isolation. Storybook works with any component-based library in JavaScript such as React, Vue, Angular, and more.

To install Storybook in our React application, run this command:

`npx sb init`

You should now be able to run Storybook locally by running `npm run storybook` or, if you prefer, `yarn storybook`.

Here is a preview of the Storybook application:

The `stories` folder

By default, the `npx sb init` command creates a `stories` folder inside `src` with example components and documentation pages. While you can safely remove this folder, I recommend exploring it first!

- `Introduction.stories.mdx` contains the documentation used to generate the *Introduction to Storybook* page in the image preview above. The file is written using the MDX format, which is a combination of Markdown and JSX, so you can write components directly into the documentation. The links in this file are also worth exploring!
- `Button.tsx` and `Button.stories.tsx` are great examples of how you can define a component and a corresponding page in Storybook. The Button story documents available props, code usage, and showcases component variations. Through the Controls, you can experiment with toggling and customizing props.

The `.storybook` folder

Contains files for customizing Storybook:

- `main.js` defines the file pattern used by Storybook to determine what to include in the showcase application. By default, Storybook uses files containing `.stories` in their name.

```js
"stories": [
  "../src/**/*.stories.mdx",
  "../src/**/*.stories.@(js|jsx|ts|tsx)"
]
```

Addons for the Storybook application are also defined in `main.js`.

```js
"addons": [
  "@storybook/addon-links",
  "@storybook/addon-essentials",
  "@storybook/preset-create-react-app"
]
```

- `preview.js` configures how actions and controls show up depending on the prop's name. By default, props starting with `on`, such as `onClick`, `onChange` and `onSubmit`, are automatically interpreted by Storybook as actions, so when triggered, they get logged inside Storybook's Actions addon. Besides, props suffixed with `background` or `color` will show a color picker control, whereas props suffixed with `Date` display a date picker control.

```js
export const parameters = {
  actions: { argTypesRegex: "^on[A-Z].*" },
  controls: {
    matchers: {
      color: /(background|color)$/i,
      date: /Date$/,
    },
  },
}
```

The `package.json` file

This file is automatically updated with Storybook development dependencies, custom eslint configuration for stories, and the following scripts:

```json
"scripts": {
  "test": "react-scripts test",
  "storybook": "start-storybook -p 6006 -s public",
  "build-storybook": "build-storybook -s public"
},
```

- `npm run storybook` starts the Storybook application locally
- `npm run build-storybook` builds the Storybook application ready for deployment

Rollup needs an entry point to generate the bundle. Let's create `index.ts` in the `src` folder that exports each of our components generated by Storybook:

```js
export * from "./stories/Button"
export * from "./stories/Header"
export * from "./stories/Page"
```

Rollup is a good bundling tool choice if we want to package the React component library and reuse it in other projects.

In Webpack and Rollup: the same but different, Rich Harris recommends using Webpack for applications and Rollup for libraries.

First, you need to install dependencies either using npm

`npm install rollup @rollup/plugin-commonjs @rollup/plugin-node-resolve rollup-plugin-peer-deps-external @rollup/plugin-typescript postcss rollup-plugin-postcss --save-dev`

or with yarn

`yarn add rollup @rollup/plugin-commonjs @rollup/plugin-node-resolve rollup-plugin-peer-deps-external @rollup/plugin-typescript postcss rollup-plugin-postcss -D`

Let's understand these dependencies:

- `rollup` gives the command-line interface (CLI) to bundle the library.
- `@rollup/plugin-commonjs` converts CommonJS modules (potentially used in node_modules) to ES6, which is what Rollup understands. If you don't know what CommonJS modules are, this article explains module systems in JavaScript: What the heck are CJS AMD UMD and ESM in JavaScript?
- `@rollup/plugin-node-resolve` helps Rollup understand how to import dependencies in `node_modules`.
- `@rollup/plugin-typescript` transpiles TypeScript files to JavaScript.
- `rollup-plugin-peer-deps-external` prevents adding peer dependencies to the bundle because the consumer of the library is expected to have them. So we also get a smaller bundle size.
- `rollup-plugin-postcss` integrates with the PostCSS tool for bundling styles. You can configure this tool with CSS modules, Less or Sass, depending on your preference.

Next, we need to update `package.json`. According to the Rollup Wiki, libraries should be distributed using CommonJS and ES6. We specify the output file paths using the `main` and `module` properties. We also use these properties in the Rollup configuration file.

Then, add a `build` script that uses the `rollup` command-line interface with the `-c` argument. This means that Rollup will look for a configuration file named `rollup.config.js` to bundle the component library.

Add `react` and `react-dom` to `peerDependencies` and `devDependencies` (we still need them for Storybook) and remove them from `dependencies`. This is needed by the `rollup-plugin-peer-deps-external` plugin to prevent distributing and bundling dependencies that the consuming project already has.

```json
{
  ...
  "main": "./build/index.js",
  "module": "./build/index.es.js",
  "scripts": {
    ...
    "build": "rollup -c"
  },
  "peerDependencies": {
    "react": "^17.0.2",
    "react-dom": "^17.0.2"
  },
  "devDependencies": {
    ...
    "react": "^17.0.2",
    "react-dom": "^17.0.2"
  }
}
```

Finally, the `rollup.config.js` file has the following contents:

```js
import commonjs from "@rollup/plugin-commonjs";
import resolve from "@rollup/plugin-node-resolve";
import peerDepsExternal from "rollup-plugin-peer-deps-external";
import typescript from "@rollup/plugin-typescript";
import postcss from "rollup-plugin-postcss";

import packageJson from "./package.json";

// eslint-disable-next-line import/no-anonymous-default-export
export default {
  input: "./src/index.ts",
  output: [
    { file: packageJson.main, format: "cjs", sourcemap: true },
    { file: packageJson.module, format: "esm", sourcemap: true }
  ],
  plugins: [peerDepsExternal(), resolve(), commonjs(), typescript(), postcss()]
};
```

We created a React component library powered by Create React App. We installed Storybook to create a documentation page and enable developing components in isolation. Finally, we configured Rollup to bundle the library.

While this is one way of getting started, an alternative approach could be based on Create React Library. Let me know if you would like to see an article about that.

I hope you found this interesting. Thanks for reading!

You can control the title, description, preview image, and other custom properties that show up when someone pastes a link to your website on Twitter, Facebook, as part of a Slack message, or on other social media. This is called **link unfurling**.

In this article, we explore how to build a web page that shows image previews, a title, and a short description. Here is an example of the website preview on Facebook:

Facebook and the majority of social media platforms such as LinkedIn rely on Open Graph HTML markup to show website previews and other media previews.

- `og:type` defines the type of object or media you want to share. It can be one of the following: `article`, `book`, `music`, `profile`, `video`, `website`. If you don't specify this property, it will default to `website`.
- `og:url` is the canonical URL of your website, meaning the preferred or base URL of a webpage, typically used by search engines.
- `og:title` is the title of the page you are linking. Unless you are linking the home page, this is not your brand or website name.
- `og:description` is a short description of the page.
- `og:image` is the URL of the preview image of your web page.

For example, the following snippet included in the `head` section of an HTML page of the https://fearlessintotech.com website was used to generate the preview above.

```html
<meta property="og:type" content="website" />
<meta property="og:url" content="https://fearlessintotech.com" />
<meta property="og:title" content="Fearless into Tech" />
<meta property="og:description" content="A community initiative from women for women. Fearless into Tech is a decentralised, peer-to-peer supported group of tech-curious, self-learning women." />
<meta property="og:image" content="https://fearlessintotech.com/images/preview.jpg" />
```

Twitter developed its own mechanism called Twitter Cards for optimizing links in tweets and enabling rich content sharing. There is some overlap between Open Graph and Twitter Cards. For example, `twitter:title`, `twitter:description`, `twitter:url` and `twitter:image` have the same function as their Open Graph counterparts. In addition, there are some Twitter-specific tags:

- `twitter:card`, similar to `og:type`, specifies which card type should be used when sharing the web page link. There are four types of cards: *Summary Card*, containing a title, description and thumbnail; *Summary Card with Large Image*, the same as *Summary Card* but featuring a large image instead of a thumbnail; *App Card*, for links to download mobile apps; *Player Card*, for showing video, audio, and other media.
- `twitter:site` specifies the Twitter account associated with the website.
- `twitter:creator` specifies the Twitter account associated with the author of the content on the website. This might be more relevant to articles posted on blog platforms, where the site and creator are distinct.

```html
<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:site" content="@FearlessInto">
<meta name="twitter:domain" value="fearlessintotech.com" />
<meta name="twitter:title" value="Fearless into Tech" />
<meta name="twitter:description" value="A community initiative from women for women. Fearless into Tech is a decentralised, peer-to-peer supported group of tech-curious, self-learning women." />
<meta name="twitter:image" content="https://fearlessintotech.com/images/preview.jpg" />
<meta name="twitter:url" value="https://fearlessintotech.com" />
```

Slack uses a combination of Open Graph and Twitter tags to unfurl links as shown in Slack API Documentation. The website preview shows up like this when shared in a Slack message:

Once you include all the tags for various social media platforms, you can test how your website link would be unfurled by entering the link to your website in the following pages:

- https://cards-dev.twitter.com/validator
- https://developers.facebook.com/tools/debug/

To sum up, including meta tags on your web pages using the Open Graph coupled with Twitter Cards is a powerful mechanism that allows various social media platforms to perform link unfurling and show previews when someone shares a link to your website.

A great example of custom-styled images is on most social media applications such as Facebook, Twitter, and Linkedin, where profile pictures are displayed in a circle.

This article shows you how you can create a similar effect to display a black and white image with rounded corners.

This is the markup for the image in HTML:

```html
<img
  src="profile.jpg"
  width="200"
  height="200"
  alt="Profile picture"
  class="rounded grayscale"
/>
```

The `img` tag is styled using two CSS utility classes: `rounded` and `grayscale`.

The `rounded` class turns the square image into a circle by using the `border-radius` property and setting it to `50%` of the image's width.

```css
.rounded {
  border-radius: 50%;
}
```

Sara Cope explores the wonderful `border-radius` property in depth on CSS Tricks.

The `grayscale` class creates the black and white effect. You need to use the `filter` property with the `grayscale` option and specify `100%`, which completely desaturates the color. According to MDN Web Docs, most modern browsers support the `filter` property, with older WebKit-based browsers requiring the `-webkit-` vendor prefix.

```css
.grayscale {
  filter: grayscale(100%);
  -webkit-filter: grayscale(100%);
}
```

To sum up, you can use the `border-radius` and `filter: grayscale` properties to stylize an image with rounded corners and a black and white appearance. This is especially useful when you don't have control over the image content. Thanks for reading and I hope you found this interesting!

There might be cases when you want to calculate and display the color contrast ratio. For example, if you are developing a component library or design system and you want to emphasize color usage guidelines with accessibility in mind.

This article shows how you can calculate the color contrast ratio in TypeScript given two colors in either RGB or hex format.

First, let's explore the concept of color contrast ratio visually. If you inspect the color of a text element in Chrome Developer Tools, you can see the contrast ratio for that color and whether the contrast is high enough for the text to be considered legible. In the screenshot, color `#2db477` against white has a contrast ratio of 2.66, which is not high enough. The Developer Tools also recommend two alternative colors that meet the contrast requirements, with ratios of 4.5:1 and 7.0:1 respectively.

What do AA: 4.5 and AAA: 7.0 mean?

The Web Content Accessibility Guidelines (WCAG) is a standard that defines three levels of accessibility, which should be implemented in the following order of priority:

- A: This level is required for assistive technology to read and understand a webpage.
- AA: This is the ideal level of support, and it is required for government and public websites. Normal text should have a contrast ratio of at least 4.5:1 to meet this level. Large text, such as headings, as well as icons, should have a contrast ratio of at least 3:1.
- AAA: This level offers specialized support for specific audiences. Text should have a contrast ratio of at least 7:1.
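The thresholds above can be expressed as a small helper. Here is a sketch in TypeScript (the `wcagLevel` function name is my own, not part of any standard API):

```typescript
type WcagLevel = "AAA" | "AA" | "AA (large text only)" | "Fail";

// Map a contrast ratio to the strictest WCAG level it satisfies,
// using the 7.0, 4.5, and 3.0 thresholds described above.
function wcagLevel(ratio: number): WcagLevel {
  if (ratio >= 7) return "AAA";
  if (ratio >= 4.5) return "AA";
  if (ratio >= 3) return "AA (large text only)";
  return "Fail";
}
```

For example, the 2.66 ratio from the screenshot maps to `"Fail"` for normal text.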

The WCAG defines a procedure which we can use to calculate color contrast. There are many contrast checkers online such as the Chrome Developer Tools, WebAim, and many others that use the procedure described in the WCAG specification.

According to the procedure, we first need to measure the relative luminance of the foreground color and background color. Then we calculate the contrast ratio using the formula `(L1 + 0.05) / (L2 + 0.05)`, where L1 represents the luminance of the lighter color and L2 represents the luminance of the darker color.

```typescript
function contrast(foregroundColor: RGB, backgroundColor: RGB) {
  const foregroundLuminance = luminance(foregroundColor);
  const backgroundLuminance = luminance(backgroundColor);
  // The lighter color's luminance (L1) must be the numerator.
  return backgroundLuminance > foregroundLuminance
    ? (backgroundLuminance + 0.05) / (foregroundLuminance + 0.05)
    : (foregroundLuminance + 0.05) / (backgroundLuminance + 0.05);
}
```

You can define the `RGB` type as an array of three numbers. We will use this type when calculating contrast.

```typescript
type RGB = [number, number, number];
```

The relative luminance of a color can be calculated by linearizing each channel (red, green, and blue) and combining them with the standard weighting coefficients:

```typescript
function luminance(rgb: RGB) {
  const [r, g, b] = rgb.map((v) => {
    v /= 255;
    return v <= 0.03928 ? v / 12.92 : Math.pow((v + 0.055) / 1.055, 2.4);
  });
  return r * 0.2126 + g * 0.7152 + b * 0.0722;
}
```

If we use the hex color format, it is useful to convert it to an RGB value first. We can achieve that using the following function:

```typescript
function getRgbColorFromHex(hex: string) {
  hex = hex.slice(1);
  const value = parseInt(hex, 16);
  const r = (value >> 16) & 255;
  const g = (value >> 8) & 255;
  const b = value & 255;
  return [r, g, b] as RGB;
}
```
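The function above assumes a six-digit hex string. A hypothetical variant (`parseHexColor` is my own name, not part of the original solution) that also accepts three-digit shorthand such as `#fff` could look like:

```typescript
type RGB = [number, number, number];

// Variant of the hex parser that also handles three-digit
// shorthand, e.g. "#fff" -> [255, 255, 255].
function parseHexColor(hex: string): RGB {
  let h = hex.slice(1);
  if (h.length === 3) {
    // Expand each shorthand digit: "2b7" becomes "22bb77".
    h = h.split("").map((c) => c + c).join("");
  }
  const value = parseInt(h, 16);
  return [(value >> 16) & 255, (value >> 8) & 255, value & 255];
}
```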

Finally, we can use our functions to calculate the contrast between two colors (e.g. `#2db477` and `#ffffff`):

```typescript
const foregroundColor = getRgbColorFromHex("#2db477");
const backgroundColor = getRgbColorFromHex("#ffffff");
const contrastRatio = contrast(foregroundColor, backgroundColor);
```

Alternatively, if you prefer using the RGB type directly:

```typescript
const contrastRatio = contrast([45, 180, 119], [255, 255, 255]);
```
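For reference, here is the whole solution as one self-contained listing (note the comparison in `contrast` is arranged so that the lighter color's luminance is always the numerator, as the WCAG formula requires):

```typescript
type RGB = [number, number, number];

// Relative luminance per the WCAG definition: linearize each
// sRGB channel, then combine with the standard coefficients.
function luminance(rgb: RGB) {
  const [r, g, b] = rgb.map((v) => {
    v /= 255;
    return v <= 0.03928 ? v / 12.92 : Math.pow((v + 0.055) / 1.055, 2.4);
  });
  return r * 0.2126 + g * 0.7152 + b * 0.0722;
}

// Contrast ratio (L1 + 0.05) / (L2 + 0.05), lighter over darker.
function contrast(foregroundColor: RGB, backgroundColor: RGB) {
  const foregroundLuminance = luminance(foregroundColor);
  const backgroundLuminance = luminance(backgroundColor);
  return backgroundLuminance > foregroundLuminance
    ? (backgroundLuminance + 0.05) / (foregroundLuminance + 0.05)
    : (foregroundLuminance + 0.05) / (backgroundLuminance + 0.05);
}

function getRgbColorFromHex(hex: string) {
  hex = hex.slice(1);
  const value = parseInt(hex, 16);
  return [(value >> 16) & 255, (value >> 8) & 255, value & 255] as RGB;
}

const ratio = contrast(getRgbColorFromHex("#2db477"), getRgbColorFromHex("#ffffff"));
console.log(ratio.toFixed(2)); // ~2.66, matching the Developer Tools screenshot
```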

To sum up, we looked at how the Web Content Accessibility Guidelines (WCAG) defines three levels of accessibility and what each level requires in terms of color contrast. We then defined functions in TypeScript that calculate color contrast using the formulas in the WCAG specification, given colors in either hex or RGB format.
