Yes, I think that's where I realised my mistake was:
If r_i > min{ℓ_i, m_i}, then r_i is greater than the smaller of the two exponents ℓ_i and m_i. So if r_i > m_i, then x can't divide b, and if r_i > ℓ_i, then x can't divide a. Oh, yeah, I get your approach now, thank you!
EDIT: Can I do the same with part 2 of my proof:
If r_i < min{ℓ_i, m_i}, then r_i is smaller than both ℓ_i and m_i, so I can say the corresponding prime power divides both a and b. I feel the mistake in my proof starts where I say r_i + 1 <= ℓ_i. Can you confirm that? Is it that I can't assume the difference between them is exactly 1? It may be that r_i < m_i with, say, r_i = 2 and m_i = 5, etc. Then I'm not sure how to continue from that point.
I can only think of this: if you choose e_i = min(ℓ_i, m_i), then the product of all the primes p_i raised to the power e_i, for 1 <= i <= k, divides both a and b. Since e_i > r_i for at least one i (and e_i >= r_i for all i), this product is greater than r, so r is not the greatest common divisor?
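Just to convince myself that taking e_i = min(ℓ_i, m_i) really does give the gcd, here's a quick sketch in Python (the function names are my own, and I'm checking against the built-in `math.gcd`):

```python
from math import gcd

def prime_factorization(n):
    """Return {prime: exponent} for an integer n >= 2, by trial division."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def gcd_via_exponents(a, b):
    """Build gcd(a, b) as the product of p_i ** min(l_i, m_i)."""
    fa, fb = prime_factorization(a), prime_factorization(b)
    result = 1
    # Only primes appearing in both factorizations contribute;
    # for the rest, min(l_i, m_i) = 0 and p_i ** 0 = 1.
    for p in set(fa) & set(fb):
        result *= p ** min(fa[p], fb[p])
    return result

# 360 = 2^3 * 3^2 * 5 and 756 = 2^2 * 3^3 * 7,
# so the common part is 2^2 * 3^2 = 36.
print(gcd_via_exponents(360, 756), gcd(360, 756))  # → 36 36
```

Trying a few random pairs, the two always agree, which matches the claim that the min-of-exponents product is a common divisor at least as large as any other.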