Bitwise And of Range

Recently I had to solve a problem that asked for the bitwise and of a range of nonnegative integers. There is an obvious linear solution, which simply computes the bitwise and of every number in the range.

Bitwise and of [4, 10] = 4 & 5 & 6 & 7 & 8 & 9 & 10

However, after thinking about how the anding ends up “erasing” bits permanently, I figured out the following logarithmic solution:

def bitwise_and_of_range(begin, end):
    if begin == end:
        return begin
    else:
        return bitwise_and_of_range(begin >> 1, end >> 1) << 1

Essentially, if there are at least two numbers in the range, the last bit of the result will be zero, so you can compute the bitwise and of the prefixes (the numbers shifted right by one bit) and append a zero to the result by shifting it back to the left.
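
To build confidence in the recursion, it can be checked against the linear solution for small ranges. Below is a minimal test sketch; brute_force is a name of my own, and it reuses the bitwise_and_of_range function defined above.

from functools import reduce


def brute_force(begin, end):
    # The obvious linear solution: and every number in [begin, end].
    return reduce(lambda x, y: x & y, range(begin, end + 1))


for begin in range(64):
    for end in range(begin, 64):
        assert bitwise_and_of_range(begin, end) == brute_force(begin, end)

print(bitwise_and_of_range(4, 10))  # Prints 0.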

Sliding Maximum

In this post I will present a simple and somewhat generic solution for the sliding maximum problem.

Find the maximum element for every K consecutive elements.

Note that the sliding minimum can be seen as a negated sliding maximum problem, just like the maximum spanning tree can be seen as the minimum spanning tree of the graph with negated edge weights.
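
For instance, assuming a sliding_maximum function such as the one sketched after the C++ implementation below, the sliding minimum follows directly from negation:

def sliding_minimum(values, k):
    # Negate the input, take the sliding maximum, and negate the results back.
    return [-m for m in sliding_maximum([-v for v in values], k)]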

Implementation

Below is a generic C++ sliding window I implemented that takes a comparator as a template parameter. This allows it to be instantiated for both the sliding maximum and the sliding minimum.

#include <cstddef>
#include <deque>

template <typename T, typename Comparator>
class SlidingWindow {
  struct Block {
    Block(T v, size_t w) : value(v), width(w) {}
    T value;
    size_t width;
  };
  Comparator comp;
  std::deque<Block> data;

 public:
  void push(const T t) {
    // The new element absorbs every worse element at the back of the deque,
    // taking over the window positions (widths) those elements covered.
    size_t width = 1;
    while (!data.empty() && comp(data.back().value, t)) {
      width += data.back().width;
      data.pop_back();
    }
    data.emplace_back(t, width);
  }
  T get() const { return data.front().value; }
  void pop() {
    // Either reduce the width of the best block (front), or drop it.
    if (data.empty()) {
      return;
    }
    if (data.front().width > 1) {
      data.front().width--;
    } else {
      data.pop_front();
    }
  }
};

This solution is amortized O(1) per operation, making it O(N) for N elements. Using an STL balanced tree (such as std::multiset) instead, we could not do better than O(N log N).
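
To make the technique concrete, here is a minimal Python sketch of the same monotonic-deque idea; the function name and the windowing details are my own, and the C++ class above generalizes this through its comparator.

from collections import deque


def sliding_maximum(values, k):
    # Each entry is a (value, width) pair, mirroring the C++ Block struct.
    blocks = deque()
    maxima = []
    for i, value in enumerate(values):
        # The new value absorbs every smaller value at the back of the deque.
        width = 1
        while blocks and blocks[-1][0] < value:
            width += blocks.pop()[1]
        blocks.append((value, width))
        # Once the window is full, drop the element that slid out of it.
        if i >= k:
            if blocks[0][1] > 1:
                blocks[0] = (blocks[0][0], blocks[0][1] - 1)
            else:
                blocks.popleft()
        maxima.append(blocks[0][0])
    return maxima

Running it with k = 4 over the 20 terms of the example below reproduces the Maximum column of the table, and negating input and output gives the Minimum column.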

Sliding Maximum Example

Using 20 terms and a window of width 4, we have the following table:

Current Maximum Minimum
16 16 16
1 16 1
2 16 1
13 16 1
9 13 1
11 13 2
15 15 9
14 15 9
4 15 4
8 15 4
3 14 3
7 8 3
6 8 3
12 12 3
5 12 5
18 18 5
19 19 5
17 19 5
20 20 17
10 20 10

BigWord

This post is an update on a small side project I just started. It is called BigWord and it aims to find the biggest word in a dictionary based on a multiset of letters.

Sample Usage

For instance, it finds out that the biggest English words made with the letters from “linuxwars” are “urinals” and “insular” and that the biggest English words made with “applepies” are “pappies” and “applies”.

$ ./bigword linuxwars
> urinals
> insular
$ ./bigword applepies
> pappies
> applies

Implementation Notes

The existing code is written using C++14 and should run on any system which has a reasonable C++14 compiler. The program relies on the public domain word list shipped with Fedora, but you may use any plain text dictionary you want.

It loads only the words which may be constructed with the provided letter multiset (ignoring words which are too long right away to improve performance), and then traverses the word vector from longest to shortest, collecting the biggest words whose letters fit in the provided multiset.
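
To illustrate the approach, here is a minimal Python sketch of the same idea; it is not the actual C++ implementation, and the names are my own. It uses collections.Counter as the letter multiset.

from collections import Counter


def biggest_words(letters, words):
    available = Counter(letters)
    # Keep only the words that can be built from the provided letters,
    # skipping words that are longer than the letter multiset right away.
    candidates = [word for word in words
                  if len(word) <= len(letters)
                  and not Counter(word) - available]
    if not candidates:
        return []
    # Keep every candidate of the best (longest) length.
    best_length = max(len(word) for word in candidates)
    return [word for word in candidates if len(word) == best_length]

For example, biggest_words("linuxwars", ["urinals", "insular", "linux", "wars"]) returns ["urinals", "insular"].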

Even though it is performant enough for a fairly big word list, I think there must be a more efficient way to find the biggest subsets. Currently, the first example above finishes after 200 ms on a list of 479,000 words.

Ideally, I would like even the worst-case queries to finish in less than 100 ms for word lists with less than one million words, as suggested in Usability Engineering by Jakob Nielsen.

Some possible improvements are dumping the processed word vector after the first execution and reusing it in subsequent executions, and sorting the input text file by word length so IO can be stopped earlier. However, these possibilities add undesired complexity to the setup required from the user.
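
As a rough sketch of the second idea (the file names and details here are hypothetical), the dictionary could be rewritten once with the longest words first, so that a reader may stop as soon as the remaining words are too short:

# Hypothetical preprocessing step: sort a plain text word list so that the
# longest words come first and reading can stop early.
with open("words.txt") as source:
    words = source.read().split()
words.sort(key=len, reverse=True)
with open("words-by-length.txt", "w") as destination:
    destination.write("\n".join(words) + "\n")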

The following code snippet shows how letter counts are currently compared.

  static bool is_contained(const LetterCount &a, const LetterCount &b) {
    if (a.letter_count > b.letter_count) {
      return false;
    }
    size_t remaining = b.letter_count - a.letter_count;
    for (size_t i = 0; i < alphabet_size; i++) {
      if (a.counters[i] > b.counters[i]) {
        return false;
      }
      // By catching excessive differences we may fail early.
      const size_t difference = b.counters[i] - a.counters[i];
      if (difference > remaining) {
        return false;
      }
      remaining -= difference;
    }
    return true;
  }

There may be a faster way to do it, but I am failing to spot it.

You can find the project source code (licensed under the ISC license) at the GitHub repository.

Heap’s Algorithm

In this post, I describe an alternative to Narayana Pandita’s algorithm, which generates the permutations of a set of elements and which I have already written about.

Comparison with the alternative

This is one of the most efficient ways to generate the permutations of a list. It is more efficient than Narayana Pandita’s algorithm if both are properly implemented.

This algorithm does not generate permutations in lexicographic order, which is required in some scenarios.

This algorithm does not require any comparisons between the elements of the list it is permuting, making it more appealing for sequences without a defined order or whose elements are difficult to compare.

Python implementations

Based on implementations found on the Internet, I wrote two PEP 8 compliant Python 3 versions of Heap’s algorithm. The first one, which is also the most elegant one in my opinion, is a recursive implementation. As one may assume, the cost of the several levels of recursion makes it substantially more expensive than it needs to be.

Recursive implementation

def swap(elements, i, j):
    elements[i], elements[j] = elements[j], elements[i]


def generate_permutations(elements, n):
    if n == 0:
        yield elements
    else:
        for i in range(n - 1):
            # Generate permutations with the last element fixed.
            yield from generate_permutations(elements, n - 1)
            # Swap the last element into its next position; the swap to use
            # depends on the parity of n, as required by Heap's algorithm.
            if n % 2 == 0:
                swap(elements, i, n - 1)
            else:
                swap(elements, 0, n - 1)
        # Generate the last permutations after the final swap.
        yield from generate_permutations(elements, n - 1)


def permutations(elements):
    yield from generate_permutations(elements, len(elements))

This code uses the yield from syntax, added in Python 3.3. You can read more about it in PEP 380.

The algorithm is not trivial to understand. Essentially, it locks the rightmost element and recursively permutes all the other elements, then performs a carefully chosen swap involving the rightmost element and repeats the process until every element has been in the rightmost position.
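
For instance, here is a small usage sketch. The copies are needed because the generator yields the same list object over and over, mutated in place.

results = [list(permutation) for permutation in permutations([1, 2, 3])]
assert len(results) == 6
assert len(set(map(tuple, results))) == 6
print(results)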

Non-recursive implementation

def swap(elements, i, j):
    elements[i], elements[j] = elements[j], elements[i]


def generate_permutations(elements, n):
    # As by Robert Sedgewick in Permutation Generation Methods
    c = [0] * n
    yield elements
    i = 0
    while i < n:
        if c[i] < i:
            if i % 2 == 0:
                swap(elements, 0, i)
            else:
                swap(elements, c[i], i)
            yield elements
            c[i] += 1
            i = 0
        else:
            c[i] = 0
            i += 1


def permutations(elements):
    return generate_permutations(elements, len(elements))

Benchmarking

I have benchmarked the algorithm for lexicographic permutations and both implementations above using a sequence of 10 integers and a sequence of 10 strings of 100 characters which only differed by the last character. Note that although an input of length 10 may seem small to an unaware reader, there are 10! = 3,628,800 results generated even for such a small input.
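
The exact harness is not shown here, but a minimal sketch of how such a timing could be reproduced (using the permutations generator from the non-recursive implementation above; the helper name is my own) looks like this:

import time


def time_generator(label, elements):
    # Exhaust the generator and report the elapsed wall-clock time.
    start = time.perf_counter()
    for _ in permutations(list(elements)):
        pass
    print("{} took {:.2f} seconds.".format(label, time.perf_counter() - start))


time_generator("Non-recursive Heap's with 10 integers", list(range(10)))
time_generator("Non-recursive Heap's with 10 strings",
               ["a" * 99 + chr(ord("a") + i) for i in range(10)])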

The benchmarking results follow.

Lexicographic with 10 integers took 10.99 seconds.
Recursive Heap's with 10 integers took 14.10 seconds.
Non-recursive Heap's with 10 integers took 4.29 seconds.

Lexicographic with 10 strings took 11.66 seconds.
Recursive Heap's with 10 strings took 14.43 seconds.
Non-recursive Heap's with 10 strings took 4.31 seconds.

As you can see, the cost of recursion makes the more elegant implementation of Heap’s algorithm even slower than the lexicographic permutation generator. However, the non-recursive implementation is substantially more efficient. It is also visible from these numbers that the performance hit from the use of big strings (which are more expensive to compare than small integers) was bigger on the lexicographic permutation generator than on the generators that do not compare elements.

Balancing Chemical Equations

In this post, I describe an efficient way to balance chemical equations using linear algebra.

Procedure

From any chemical equation, we can derive a vector equation that describes the number of atoms of each element present in the reaction. Row reducing the matrix of this system leads to the general solution, from which we can easily derive a solution with integer coefficients.

Example

Find the smallest integer coefficients that balance the following chemical equation.

MnS + As2Cr10O35 + H2SO4 → HMnO4 + AsH3 + CrS3O12 + H2O

Solution

From the chemical equation above, we can derive the following matrix. Notice that the first row is for Mn, the second for S, then As, Cr, O, and H. However, the order you choose does not matter, as long as all the columns respect it.

1    0    0   -1    0    0    0
1    0    1    0    0   -3    0
0    2    0    0   -1    0    0
0   10    0    0    0   -1    0
0   35    4   -4    0  -12   -1
0    0    2   -1   -3    0   -2

The last four columns have negative atom counts because they correspond to the products, which have been subtracted from both sides of the original vector equation.

After row reducing the matrix with Octave, we get

1.00000   0.00000   0.00000   0.00000   0.00000   0.00000  -0.04893
0.00000   1.00000   0.00000   0.00000   0.00000   0.00000  -0.03976
0.00000   0.00000   1.00000   0.00000   0.00000   0.00000  -1.14373
0.00000   0.00000   0.00000   1.00000   0.00000   0.00000  -0.04893
0.00000   0.00000   0.00000   0.00000   1.00000   0.00000  -0.07951
0.00000   0.00000   0.00000   0.00000   0.00000   1.00000  -0.39755

However, this is not of much help, as we want to end up with integer coefficients. Therefore, we change the number display format to rational by using format rat. This displays the last column as six rationals with the same denominator.

 -16/327
 -13/327
-374/327
 -16/327
 -26/327
-130/327

After multiplying the last column by 327 and changing its sign, we get the coefficients for the balanced equation.

 16
 13
374
 16
 26
130

Note that the free variable is the coefficient of H2O, which is set to 327 as we are interested in the smallest integer that makes all the other coefficients integers.

Wrapping up, the balanced equation has coefficients 16, 13, 374, 16, 26, 130, 327, in this order. Not something easy to find by trial and error.
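
If you would rather let a program do the row reduction, here is a minimal verification sketch in Python using SymPy, assuming it is installed; it computes the null space of the matrix above and scales the result to integer coefficients.

from sympy import Matrix, ilcm

# Columns: MnS, As2Cr10O35, H2SO4, HMnO4, AsH3, CrS3O12, H2O.
atoms = Matrix([
    [1, 0, 0, -1, 0, 0, 0],      # Mn
    [1, 0, 1, 0, 0, -3, 0],      # S
    [0, 2, 0, 0, -1, 0, 0],      # As
    [0, 10, 0, 0, 0, -1, 0],     # Cr
    [0, 35, 4, -4, 0, -12, -1],  # O
    [0, 0, 2, -1, -3, 0, -2],    # H
])

# The balanced coefficients span the one-dimensional null space of the matrix.
solution = atoms.nullspace()[0]
# Scale by the least common multiple of the denominators to get integers.
multiplier = ilcm(*[term.q for term in solution])
print([term * multiplier for term in solution])
# Prints [16, 13, 374, 16, 26, 130, 327].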