# Mathematics

The Motzkin numbers $M_n$ are defined by

$M_n := \sum_{k \geq 0} \binom {n}{2k} C_{k}$

where $C_{k}\,$ are the Catalan numbers. The following result is established in [1].

Theorem 1 (Theorem 5.4 of [1]). The Motzkin number $M_n$ is divisible by $5$ if and only if $n$ has one of the following forms

$(\, 5i + 1 )\, 5^{2j} - 2,\, (\, 5i + 2 )\, 5^{2j-1} - 1,\, (\, 5i + 3 )\, 5^{2j-1} - 2,\, (\, 5i + 4 )\, 5^{2j} - 1$

where $i, j \in \mathbb{N}$ and $j \geq 1$.

Theorem 2: The asymptotic density of numbers of the first form in Theorem 1 is $\frac {1}{120}$, and numbers of the fourth form also have asymptotic density $\frac {1}{120}$. Numbers of the second and third forms each have asymptotic density $\frac {1}{24}$.

Corollary 3: The asymptotic density of the set $\{n \in \mathbb{N}: M_{n} \equiv 0 \mod {5} \}$ is $\frac {1}{10} \,$.

Remark 4: Corollary 3 follows immediately from Theorems 1 and 2 and the disjointness of the four forms of integers listed in Theorem 1, since $2 \cdot \frac {1}{120} + 2 \cdot \frac {1}{24} = \frac {1}{10}$.

Remark 5: Numerical tests also show that roughly 22.5% of Motzkin numbers are congruent to each of $1, 2, 3, 4 \mod {5}$.
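Corollary 3 is easy to probe numerically. A minimal sketch (Python; the cutoff $N = 2000$ is an arbitrary choice) computes the Motzkin numbers modulo 5 with the standard recurrence $M_n = M_{n-1} + \sum_{k=0}^{n-2} M_k M_{n-2-k}$:

```python
# Motzkin numbers via the recurrence
# M_n = M_{n-1} + sum_{k=0}^{n-2} M_k * M_{n-2-k},  with M_0 = M_1 = 1,
# reduced modulo `mod` at every step so the integers stay small.
def motzkin_mod(n_max, mod):
    m = [1 % mod, 1 % mod]
    for n in range(2, n_max):
        m.append((m[n - 1] + sum(m[k] * m[n - 2 - k] for k in range(n - 1))) % mod)
    return m[:n_max]

N = 2000  # cutoff; any reasonably large value will do
m5 = motzkin_mod(N, 5)
print(m5.count(0) / N)  # close to 1/10, as Corollary 3 predicts
```

The observed proportion sits close to $\frac{1}{10}$, consistent with Corollary 3.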

Proof of Theorem 2.

Firstly, consider numbers of the form $(\, 5i + 1)\, 5^{2j} - 2\,$. Since we are interested in asymptotic density it is enough to count numbers of the form $(\, 5i + 1)\, 5^{2j}\,$. For fixed $j \geq 1$ we have

$\#\{ i \geq 0: (\, 5i + 1)\, 5^{2j} \leq N \} = \frac {N}{5^{2j+1}} - \frac {1}{5} - E(\, j, N )\,$

where $|E(\, j, N )\,| \leq 1$ is an error term introduced by dropping the floor function. So,

$\#\{n < N: n = (\, 5i + 1)\, 5^{2j} \text{ for some } i, j \in \mathbb{N} \text{ with } j \geq 1 \}$

$= \sum_{j \geq 1} (\, \frac {N}{5^{2j+1}} - \frac {1}{5} - E(\, j, N )\, )\, = \sum_{j = 1}^{\lfloor \frac{1}{2} (\, \log_{5} (\, N )\,- 1 )\, \rfloor} (\, \frac {N}{5^{2j+1}} - \frac {1}{5} - E(\, j, N )\, )\,$

$= \frac {N}{125} \sum_{j = 0}^{\lfloor \frac{1}{2} (\, \log_{5} (\, N )\,- 1 )\, \rfloor - 1} (\, \frac {1}{25} )\,^{j} - E^{'}(\, N )\,$

where the new error term $E^{'} (\, N )\,$ satisfies $|E^{'} (\, N )\,| < \frac{3}{5} \log_{5} (\, N )\,$. Then

$\#\{n < N: n = (\, 5i + 1)\, 5^{2j} \text{ for some } i, j \in \mathbb{N} \text{ with } j \geq 1 \}$

$= (\, \frac {N}{125} )\, \frac {1 - (\, \frac {1}{25} )\,^{\lfloor \frac{1}{2} (\, \log_{5} (\, N )\,- 1 )\, \rfloor} }{1 - \frac {1}{25} } - E^{'}(\, N )\,$.

Since $\lim_{N \to \infty} \frac {E^{'}(\, N )\, }{N} = 0$ and $\frac {1}{125} \cdot \frac {1}{1 - \frac {1}{25}} = \frac {1}{120}$,

$\lim_{N \to \infty} \frac {1}{N} \#\{n < N: n = (\, 5i + 1)\, 5^{2j} \text{ for some } i, j \in \mathbb{N} \text{ with } j \geq 1 \} = \frac {1}{120}$.

A similar argument establishes the asymptotic densities of the other three forms of numbers in the theorem.

$\Box$
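The same geometric-series computation gives all four densities, and exact rational arithmetic confirms that they sum to $\frac {1}{10}$. A minimal sketch using Python's `fractions` module:

```python
from fractions import Fraction

# For fixed j, numbers of the form (5i + a) 5^e - b make up an arithmetic
# progression of density 5^(-(e+1)); summing the geometric series over
# j >= 1 gives the four densities of Theorem 2.
r = Fraction(1, 25)                      # ratio between consecutive j terms
d_first = Fraction(1, 5**3) / (1 - r)    # exponent 5^(2j):   1/125 + 1/3125 + ...
d_second = Fraction(1, 5**2) / (1 - r)   # exponent 5^(2j-1): 1/25 + 1/625 + ...
d_third, d_fourth = d_second, d_first    # forms 3 and 4 mirror forms 2 and 1

total = d_first + d_second + d_third + d_fourth
print(d_first, d_second, total)  # 1/120 1/24 1/10
```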

## References

[1] Deutsch, E., & Sagan, B. E. (2006). Congruences for Catalan and Motzkin numbers and related sequences. Journal of Number Theory, 117(1), 191–215. http://doi.org/10.1016/j.jnt.2005.06.005

The Catalan numbers are defined by

$C_n = \frac{1}{n + 1} \binom {2n} {n}.$

Corollary 2 of the previous post established that

$\#\{n < 2^t: C_{n} \equiv 0 \mod {2^k} \} = \sum_{i = k+1}^{i = t} \binom{t} {i}.$

This result can be used to show that the asymptotic density of the set $\{n \in \mathbb{N}: C_{n} \equiv 0 \mod {2^k} \}$ is $1.$

To this end let $N \in \mathbb{N}$ satisfy $2^{r} \leq N < 2^{r+1}$ for some $r \in \mathbb{N}.$ Then $r = \lfloor \log_2(\, N)\, \rfloor$ and

$\#\{n < N: d(\, \alpha (\, n )\, )\, = k \} \leq \#\{n < 2^{r+1}: d(\, \alpha (\, n )\, )\, = k \} = \binom {r+1} {k + 1}.$

So, as mentioned in the previous post,

$\#\{n < N: C_{n} \equiv 0 \mod {2^k} \} = N - \#\{n < N: d(\, \alpha (\, n )\, )\, < k \}$

$= N - \sum_{i = 0}^{i = k - 1} \#\{n < N: d(\, \alpha (\, n )\, )\, = i \} \geq N - \sum_{i = 0}^{i = k - 1} \binom {r+1} {i + 1}.$

And so

$\frac {1}{N} \#\{n < N: C_{n} \equiv 0 \mod {2^k} \} \geq 1 - \frac {1}{N} P_{k} (\, r )\,$

$= 1 - \frac {1}{N} P_{k} (\, \lfloor \log_2 (\, N )\, \rfloor )\,$

where $P_{k} (\, r )\,$ is a polynomial in $r$ of degree $k$ with coefficients depending on $k$. Since $\lim_{N \to \infty} \frac {1}{N} (\, \log_2 N )\, ^{k} = 0$ for fixed $k$, the second term above vanishes in the limit and

$\lim_{N \to \infty} \frac {1}{N} \#\{n < N: C_{n} \equiv 0 \mod {2^k} \} = 1.$

This result is not surprising given the highly composite nature of the Catalan numbers. We would also expect the same result to hold with $2^{k}$ replaced by any natural number greater than $1$.
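The convergence can be observed numerically without computing any large Catalan numbers, using the identity $\omega_{2} (\, C_{n} )\, = d(\, \alpha (\, n )\, )\,$ of Liu and Yeh from the previous post. A sketch in Python; the choice $k = 2$ is just an example:

```python
def d_alpha(n):
    """d(alpha(n)): binary digit sum of alpha(n) = (CF_2(n + 1) - 1) / 2."""
    m = n + 1
    while m % 2 == 0:   # strip powers of 2 to obtain the odd part CF_2(n + 1)
        m //= 2
    return bin((m - 1) // 2).count("1")

k = 2  # example modulus 2^k = 4
for t in (5, 10, 15, 18):
    # omega_2(C_n) = d_alpha(n), so this counts the n < 2^t with 2^k | C_n
    count = sum(1 for n in range(2**t) if d_alpha(n) >= k)
    print(t, count / 2**t)
```

The printed fractions climb toward 1 as $t$ grows, as the limit above predicts.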

## Summary

The Catalan numbers are defined by

$C_n = \frac{1}{n + 1} \binom {2n} {n}.$

This note will use results obtained by Liu and Yeh in [1] to obtain a formula for the number of $n < 2^{ t }$ for which $C_n$ is divisible by $2^{ k }$, where $t, k \in \mathbb{N}.$

Firstly, as in [1], let the p-adic order of a positive integer n be defined by

$\omega_p(\, n )\, := \max\{\alpha \in \mathbb{N} : p^{\alpha} \mid n\}$

and the cofactor of n with respect to $p^{ \omega_p(\, n )\, }$  be defined by

$CF_p(\, n )\, := \frac {n} {p^{\omega_p(\, n )\,}} .$

In addition the function $\alpha : \mathbb{N} \to \mathbb{N}$  is defined by

$\alpha (\, n )\, := \frac {CF_2(\, n + 1 )\, - 1} {2}.$

For an integer $p \geq 2$, we write the base $p$ expansion of a number $n$ as

$[\, n ]\,_{p} = \langle n_{r} n_{r-1} ... n_{1} n_{0} \rangle$

where $n_{i} \in [\, 0, p - 1]\,$  and $n = n_{r} p^{r} + n_{r-1} p^{r-1} + ... + n_{1} p + n_{0}.$

Then the function $d : \mathbb{N} \to \mathbb{N}$ defined by $d (\, n )\, := n_{r} + n_{r-1} + ... + n_{1} + n_{0}$ is the sum of the $p$-adic digits of a natural number $n$ (i.e. the sum of the digits of $n$ when $n$ is written in base $p$). Here we will only be interested in the case $p = 2$. Then

Theorem 1: $\#\{n < 2^t: d(\, \alpha (\, n )\, )\, = k \} = \binom {t} {k + 1}.$

Corollary 2: $\#\{n < 2^t: C_{n} \equiv 0 \mod {2^k} \} = \sum_{i = k+1}^{i = t} \binom{t} {i}.$

Corollary 3: $\#\{n < 2^t: C_{n} \equiv 2^{k-1} \mod {2^k} \} = \binom{t} {k}.$
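Before the proofs, Corollaries 2 and 3 can be sanity-checked by brute force for small parameters (here $t = 12$ and $k = 2$, chosen arbitrarily), computing each $C_n$ exactly:

```python
import math

def catalan(n):
    return math.comb(2 * n, n) // (n + 1)

t, k = 12, 2  # small example parameters
residues = [catalan(n) % 2**k for n in range(2**t)]

# Corollary 2: #{n < 2^t : C_n == 0 mod 2^k} = sum_{i=k+1}^{t} binom(t, i)
count_zero = residues.count(0)
assert count_zero == sum(math.comb(t, i) for i in range(k + 1, t + 1))

# Corollary 3: #{n < 2^t : C_n == 2^(k-1) mod 2^k} = binom(t, k)
count_half = residues.count(2 ** (k - 1))
assert count_half == math.comb(t, k)
print(count_zero, count_half)  # 4017 66
```

(For $k = 1$ the count in Corollary 3 appears to be off by one, because the boundary case $n = 2^{t} - 1$, which gives an odd $C_n$, falls outside the case analysis in the proof of Theorem 1; the checks above use $k = 2$ to avoid that edge case.)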

## Proofs.

Proof of Theorem 1:

Let  $\alpha = \alpha (\, n )\, = \frac {CF_2(\, n + 1 )\, - 1} {2}$ .  Then

$n + 1 = 2^{s} (\, 2 \alpha + 1 )\,$

for some $s \in \mathbb{N}$, $s \geq 0$.  Writing $n + 1$  in base 2 we have

$[\, n + 1 ]\,_{2} = \langle [\, \alpha ]\,_{2}, 1, 0 ... 0, 0 \rangle$  where there are s  0’s at the end and $s \geq 0$  is arbitrary. So,

$[\, n ]\,_{2} = \langle [\, \alpha ]\,_{2}, 0, 1 ... 1, 1 \rangle$  where there are s  1’s  at the end and $s \geq 0$  is arbitrary. It can be seen that $d(\, n )\, = d(\, \alpha )\, + s.$
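The identity $d(\, n )\, = d(\, \alpha )\, + s$ is easy to check exhaustively for small $n$; a short Python sketch verifying it for all $n < 2^{10}$:

```python
def digit_sum(m):
    """d(m): the sum of the binary digits of m."""
    return bin(m).count("1")

for n in range(2**10):
    m, s = n + 1, 0
    while m % 2 == 0:   # factor n + 1 = 2^s * (2*alpha + 1)
        m //= 2
        s += 1
    alpha = (m - 1) // 2
    assert digit_sum(n) == digit_sum(alpha) + s
```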

Since $n < 2^{t}$  and  $d(\, \alpha (\, n )\, )\, = k$  the possible base 2 representations of n are

$[\, n ]\,_{2} = \langle [\, \alpha ]\,_{2}, 0 \rangle$  with $\alpha < 2^{t - 1}$

$[\, n ]\,_{2} = \langle [\, \alpha ]\,_{2}, 0, 1 \rangle$  with $\alpha < 2^{t - 2}$

.

.

.

$[\, n ]\,_{2} = \langle [\, \alpha ]\,_{2}, 0, 1, 1, ... 1 \rangle$  with $\alpha < 2^{k}$  and $(\, t - k - 1)\,$   1’s at the end.

Therefore, counting the admissible $\alpha$ in each case, and using the fact that there are $\binom{m}{k}$ values of $\alpha < 2^{m}$ with $d (\, \alpha )\, = k$, gives

$\#\{n < 2^t: d(\, \alpha (\, n )\, )\, = k \} = \sum_{i = k}^{i = t - 1} \binom{i} {k} = \binom{t} {k + 1}.$

$\Box$

## Proof of Corollary 2.

The following result appears in [1].

Theorem 4 (Corollary 4.3 of [1]): In general, we have $\omega_{2} (\, C_{n} )\, = d (\, \alpha )\,$. In particular, $C_{n} \equiv 0 \mod {2^{k}}$ if and only if $d (\, \alpha )\, \geq k$, and $C_{n} \equiv 2^{k - 1} \mod {2^{k}}$ if and only if $d (\, \alpha )\, = k - 1.$

Corollary 2 follows from Theorem 1 and Theorem 4 above since

$\#\{n < 2^t: C_{n} \equiv 0 \mod {2^k} \} = \#\{n < 2^t: d(\, \alpha (\, n )\, )\, \geq k \}$  from Theorem 4

$= 2^{t} - 1 - \sum_{i = 0}^{i = k - 1} \#\{n < 2^t: d(\, \alpha (\, n )\, )\, = i \}$  (the $-1$ arises because $n = 2^{t} - 1$, which satisfies $d(\, \alpha (\, n )\, )\, = 0 < k$, is not counted by the formula of Theorem 1)

$= 2^{t} - 1 - \sum_{i = 0}^{i = k - 1} \binom{t} {i + 1}$  from Theorem 1

$= 2^{t} - \sum_{i = 0}^{i = k} \binom{t} {i}$

$= \sum_{i = k + 1}^{i = t} \binom{t} {i}$  since  $2^{t} = \sum_{i = 0}^{i = t} \binom{t} {i}$

$\Box$

## Proof of Corollary 3.

From Theorem 4,

$\#\{n < 2^t: C_{n} \equiv 2^{k-1} \mod {2^k} \} = \#\{n < 2^t: d(\, \alpha (\, n )\, )\, = k - 1 \}$

$= \binom{t} {k}$  from Theorem 1.

$\Box$

## Reference

[1] Liu, S.-C., & Yeh, J. (2010). Catalan numbers modulo $2^k$. Journal of Integer Sequences, 13, Article 10.5.4.

Until recently when I started paying more attention to my surroundings from the left side of the car, I assumed naively that most drivers obeyed speed limits.

The title is a little contentious since there are some restrictions. Obviously cars can’t speed when other cars or traffic lights get in the way. However, when there are no restrictions on speed other than the speed limit itself, it seems most drivers speed most of the time.

The proof involves sampling from a population to estimate the proportion of cars that are speeding. At each point in time each car on the road is either speeding or not speeding. I have assumed that the proportion of cars which are speeding does not change over time. This assumption is necessary because the sample is collected at different points in time, and we need the population to remain the same throughout.

The sampling itself is fairly straightforward. Just jump in the car and drive (and count). Drive just above the speed limit on a road where there are no traffic lights and traffic conditions do not impede drivers’ speeds. Count the number of cars you pass (non-speeders) and the number of cars that pass you (non-non-speeders). Collect enough sample points as per standard statistical theory and use the results to estimate the proportion of drivers who speed.
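Under the stated assumptions this is a standard binomial proportion estimate. A minimal sketch with made-up counts (120 cars passed me, 40 I passed — hypothetical numbers, not my actual data) and a normal-approximation 95% confidence interval:

```python
import math

passed_me, i_passed = 120, 40   # hypothetical counts: cars that passed me, cars I passed
n = passed_me + i_passed
p_hat = passed_me / n           # point estimate of the proportion of speeders

# 95% confidence interval using the normal approximation to the binomial
se = math.sqrt(p_hat * (1 - p_hat) / n)
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"estimate {p_hat:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```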

I have done the experiment and the result is summarised by the title of the story. It’s clear now why my dad used to say that most drivers consider the speed limit as a minimum rather than a maximum.

One of the strange characteristics of the driver population is that individual cars can change their state from one time to another by either slowing down or speeding up. This is different from the usual scenario where for example the proportion of defectives on a production line is estimated. An item that is defective stays defective. The car is like Schrödinger’s cat in the famous thought experiment.

As an addendum, I note that The Age newspaper recently (2 June 2017) published an article bearing out the above observation using a survey of drivers. See the story *Motorists behaving badly*.

> Motorists behaving badly … and they know it. Speeding, texting while driving and answering hand-held mobile phones. These dangerous driving practices are rife on Victorian roads.

By making some simple assumptions we can establish an equation governing hair growth. The equation shows that the rate of hair growth generally decreases exponentially and that hair length is bounded by a constant which depends on the initial assumptions.

Before beginning I need to clarify what hair length means since not all hairs have the same length. I have treated hair length as some sort of average over all hairs. The nature of the average is a little vague but hopefully won’t affect the result in any significant way.

Here are some simple assumptions which I have used to derive the equations. Let A(t) denote hair length at time t.

Assumption 1. The rate of increase in the length of each individual hair is a constant, say g, which is the same for each hair. This says that input into the ‘hair system’ is constant over time.

Assumption 2. The number of active hair follicles, say N, is fixed. This means that there is no hair loss.

Assumption 3. The number of hairs shed per unit of time, say L, is constant. The shed hairs will be replaced over time in accordance with Assumption 2 above.

We derive the following first order differential equation from the above assumptions. The total hair length is $N A(\, t )\,$, which increases at rate $N g$ (each of the $N$ hairs grows at rate $g$) and decreases at rate $L A$ (the $L$ hairs shed per unit time have average length $A$); dividing through by $N$ gives:

$\frac{dA}{dt} = g - \frac{L A}{N}$

If L = 0 (no shedding of hair) then the general solution is

$A = g t + c$

for some constant c. Thus the average hair length increases linearly over time without bound. If L is not zero then the general solution is

$A = \frac{g N}{L} + c \, e^{-L t / N}$

for some constant $c$ which is determined from the initial conditions. If $c > 0$ then average hair length decreases over time; if $c < 0$ it increases. In both cases $A$ approaches $g N / L$ as $t$ approaches infinity, and the rate at which $A$ approaches this bound decreases exponentially. For a fixed initial hair length and fixed $N$, $c$ is positive when $L$ is large compared to $g$ and negative when $L$ is small compared to $g$. You would expect this: if $L \gg g$ the natural hair growth rate cannot on average replace the hair that is being shed, so the average length should decrease over time. Conversely, if $L \ll g$ then growth outstrips shedding and the average hair length should increase over time.
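As a cross-check, the closed-form solution agrees with a direct numerical integration of the differential equation. All parameter values below (g, L, N and the initial length A0) are arbitrary illustrative choices:

```python
import math

g, L, N = 1.0, 0.1, 100.0    # growth rate, shed rate, follicle count (illustrative)
A0, T, steps = 5.0, 500.0, 100_000
dt = T / steps

# Euler integration of dA/dt = g - L * A / N from A(0) = A0
A = A0
for _ in range(steps):
    A += (g - L * A / N) * dt

# Closed form: A(t) = gN/L + c * exp(-L t / N) with c = A0 - gN/L
A_exact = g * N / L + (A0 - g * N / L) * math.exp(-L * T / N)
print(A, A_exact)  # both well below the limiting length gN/L = 1000
```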